JP3451221B2 - Error correction coding apparatus, method and medium, and error correction code decoding apparatus, method and medium - Google Patents
 Publication number
 JP3451221B2
 Authority
 JP
 Japan
 Prior art keywords
 code
 error correction
 symbol
 intermediate
 symbols
 Prior art date
 Legal status
 Expired - Fee Related
Description
[0001]
BACKGROUND OF THE INVENTION 1. Field of the Invention The present invention relates to an error correction coding apparatus, method, and medium, and an error correction code decoding apparatus, method, and medium, and more particularly to a novel error correction code configuration having excellent decoding error rate characteristics.
[0002]
2. Description of the Related Art Generally, a block error correction code is represented by a subspace C of an n-dimensional vector space whose elements are taken from a Galois field GF(q). The operation of mapping each element (information vector) m of the k-dimensional vector space over GF(q) to an element (codeword) c of the block error correction code C on a one-to-one basis is called encoding of the error correction code C. Such a block error correction code is called an (n, k) code. A device that performs the encoding of the error correction code C is referred to as an encoder (encoding device) for the code C. When the codeword c of the block error correction code C is transmitted and an n-dimensional error vector e occurs, an n-dimensional vector (received vector) r such that
[Equation 1] r = c + e (1)
is received. The operation of estimating the error vector e and extracting the codeword c from the received vector r is called decoding of the error correction code C. A device that decodes the error correction code C is called a decoder (decoding device) for the code C. Further, the minimum value of the Hamming distance between codewords of the (n, k) block error correction code C is called the minimum distance d. The maximum weight of the error vector e for which the codeword c can still be extracted from the received vector r of Equation (1) is determined by d.
Conventionally, various block error correction codes have been devised. For cyclic codes such as the Hamming code, the BCH code, and the Reed-Solomon code, which are typical block error correction codes, encoding is completed by operating the encoder 100 once on the information vector m, as shown in the figure, and the codeword c of the code C is generated.
On the other hand, for a code formed by combining codes into a new code, for example a concatenated code, a product code, or a superposition code, encoders 102-1 to 102-J are operated, as shown in the figure, to perform encoding sequentially by the codes C_1 to C_J, so that the codeword c of the code C is obtained as a whole.
FIG. 13 shows an example of another encoder that realizes the conventional encoding method. This example relates to a concatenated code in which the outer code is a Reed-Solomon (15,11) code and the inner code is a Hamming (7,4) code. In the figure, m is an information symbol, c is a codeword, (m_0, ..., m_10) is the divided information vector, (c_0, ..., c_14) is a Reed-Solomon codeword, 106 is an information vector division unit, 108 is a Reed-Solomon (15,11) code encoder, and 110 is a Hamming (7,4) code encoder.
The operation of this encoder will be described. First, the information vector m is divided by the information vector division unit 106 into blocks (m_0, ..., m_10), each 4 bits long. These 4-bit blocks are input to the Reed-Solomon (15,11) code encoder 108 and encoded into the Reed-Solomon (15,11) codeword (c_0, ..., c_14). Each symbol c_j (j = 0 to 14) of this Reed-Solomon codeword is then input to the encoder 110 of the Hamming (7,4) code and encoded, yielding the codeword c of the concatenated code.
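The inner-code stage of this concatenated encoder can be sketched as follows, assuming the Reed-Solomon (15,11) outer codeword is already available as fifteen 4-bit symbols (the outer encoder is omitted, and the particular generator matrix is an illustrative choice, not taken from the patent):

```python
G = [  # systematic Hamming (7,4) generator matrix [I_4 | P] (illustrative)
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def hamming74_encode(m):
    """Encode one 4-bit symbol m into a 7-bit Hamming codeword."""
    return [sum(m[i] * G[i][j] for i in range(4)) % 2 for j in range(7)]

def concatenated_encode(rs_codeword_symbols):
    """Map each 4-bit Reed-Solomon symbol c_j to a Hamming (7,4) codeword."""
    out = []
    for sym in rs_codeword_symbols:
        out.extend(hamming74_encode(sym))
    return out
```

Each of the fifteen outer symbols becomes a 7-bit inner codeword, so the concatenated codeword has length 15 × 7 = 105 bits.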
On the other hand, conventional block error correction code decoding methods are roughly classified into the bounded distance decoding method, which performs decoding using a sequence R of hard-decision symbols (symbols given a fixed value at each position), and the maximum likelihood decoding method, which performs decoding using a reception likelihood sequence θ given for each symbol.
The former, the bounded distance decoding method, decodes every received vector r within Hamming distance t of some codeword c into that codeword c, where t is the number of correctable errors determined by the minimum distance d of the block error correction code. Bounded distance decoding can be executed by algebraic calculation, so the circuit scale of its implementation can be kept small. The Euclidean decoding method and the Berlekamp-Massey decoding method are typical well-known bounded distance decoding methods.
On the other hand, the latter maximum likelihood decoding method estimates the codeword c that maximizes the conditional probability P(r | c) for the received vector r. Since the maximum likelihood decoding method generally calculates the conditional probability P(r | c) for all codewords, the circuit scale becomes large; however, it is superior in decoding error rate characteristics to the bounded distance decoding method described above. Two well-known realizations of maximum likelihood decoding exist: a method using a codeword table, and Wolf's method using a trellis.
FIG. 14 shows an example of a conventional decoder that implements the bounded distance decoding method. In the figure, R is a hard-decision symbol sequence, s is a syndrome, e is an estimated error vector, c′ is an estimated codeword, 112 is a syndrome calculation unit, 114 is a Euclidean decoder, and 116 is an EXOR (exclusive OR) unit. In this decoder, the syndrome calculation unit 112 calculates the syndrome s from the received hard-decision symbol sequence R. Using this syndrome s, the error vector e is estimated by the Euclidean decoder 114 using the Euclidean decoding method, a typical bounded distance decoding method. The estimated error vector e is combined with the hard-decision symbol sequence R by exclusive OR processing in the EXOR 116 to obtain the estimated codeword c′.
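The syndrome-then-XOR flow of FIG. 14 can be sketched for a small Hamming (7,4) code. The parity-check matrix is an illustrative choice (a table lookup stands in for the Euclidean decoder, which the patent uses for larger algebraic codes):

```python
H = [  # parity-check matrix of a systematic Hamming (7,4) code (illustrative)
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
]

# Each nonzero syndrome equals one column of H and identifies the position
# of the single correctable error.
SYNDROME_TO_POSITION = {tuple(H[i][j] for i in range(3)): j for j in range(7)}

def bounded_distance_decode(R):
    """Estimate the codeword c' from a 7-bit hard-decision sequence R."""
    s = tuple(sum(H[i][j] * R[j] for j in range(7)) % 2 for i in range(3))
    c = list(R)
    if s != (0, 0, 0):                   # estimate the error vector e ...
        c[SYNDROME_TO_POSITION[s]] ^= 1  # ... and XOR it onto R
    return c
```

A zero syndrome means R is accepted as a codeword; any single-bit error maps to a unique nonzero syndrome and is corrected.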
Further, FIG. 15 shows an example of a conventional decoder that realizes the maximum likelihood decoding method using a codeword table. In the figure, θ is a reception likelihood sequence, U is a decision variable, 120-1 to 120-M are correlators, 122-1 to 122-M are codeword tables, 124 is a maximum value judgment unit, and 126 is a codeword selector. In this decoder, the correlation values between the reception likelihood sequence θ and the codewords c_0 to c_{M-1} stored in the codeword tables 122-1 to 122-M are calculated by the correlators 120-1 to 120-M. Then, the maximum of the calculated correlation values is determined by the maximum value judgment unit 124, and that value becomes the decision variable U. According to the codeword index giving the decision variable U, the codeword selector 126 selects one of the codewords c_0 to c_{M-1} stored in the codeword tables 122-1 to 122-M, and the selected codeword is set as the estimated codeword c′.
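The correlate-and-select structure of FIG. 15 can be sketched as below. The bit-to-sign mapping (0 → +1, 1 → -1, a common convention for soft likelihoods) and the tiny two-codeword table are assumptions of this sketch, not details from the patent:

```python
def ml_decode(theta, codeword_table):
    """Return the codeword maximizing the correlation with the likelihood
    sequence theta (the role of correlators 120, judgment unit 124, and
    selector 126 combined)."""
    def correlation(c):
        # map bit b to a sign (0 -> +1, 1 -> -1) and correlate with theta
        return sum(t * (1 - 2 * b) for t, b in zip(theta, c))
    return max(codeword_table, key=correlation)
```

Because every codeword in the table is correlated against θ, the work (and the table size) grows with the number of codewords 2^k, which is exactly the scaling problem discussed next.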
[0012]
It is known that the maximum likelihood decoding method currently gives the best decoding error rate characteristic when communication is performed using an (n, k) block error correction code.
Among the known methods for realizing maximum likelihood decoding, the method using codeword tables 122-1 to 122-M, like the decoder shown in FIG. 15, requires a total codeword table size proportional to 2^k. Although not particularly shown, even in the Wolf method using a trellis, the total number of state holding registers to be prepared inside the decoder is proportional to 2^(n-k).
Therefore, when the code length n is large and the number of information symbols k is close to n/2, that is, when the transmission rate k/n ≈ 1/2, any known method for realizing maximum likelihood decoding requires a table or register bank of size approximately proportional to 2^(n/2). Even for a short code with code length n = 100, this value is 2^50, so it is very difficult to implement the maximum likelihood decoding method for such an (n, k) block error correction code.
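The scale argument can be made concrete with a one-line calculation for the n = 100, rate-1/2 example cited above:

```python
# Table/register size needed by known maximum likelihood realizations
# for a rate-1/2 code of length n = 100: on the order of 2**(n/2).
n = 100
table_size = 2 ** (n // 2)
print(table_size)  # about 1.1e15 entries, far beyond practical hardware
```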
Therefore, conventionally, in order to realize a decoding device for an (n, k) block error correction code with k/n ≈ 1/2, the bounded distance decoding method had to be adopted, and this was one of the causes of degraded decoding error rate characteristics.
The present invention has been made in view of the above problems, and an object thereof is to provide an error correction coding apparatus, method, and medium, and an error correction code decoding apparatus, method, and medium, capable of improving the decoding error rate characteristic without requiring a large table or register.
[0017]
(1) In order to solve the above problems, an error correction coding apparatus according to the present invention is an apparatus for error correction coding an information symbol to generate an error correction codeword, comprising: information symbol dividing means for dividing a plurality of original symbols constituting the information symbol into a first symbol group and a second symbol group; intermediate codeword generating means for encoding the first symbol group with a predetermined error correction code to generate an intermediate codeword composed of a plurality of intermediate symbols; code selection information generating means for converting each intermediate symbol forming the intermediate codeword into code selection information designating one of a predetermined code group; codeword selection information generating means for reconfiguring the second symbol group into the same number of symbols as the intermediate symbols, encoding each with a predetermined code, and generating a plurality of pieces of codeword selection information corresponding to the respective intermediate symbols; codeword selecting means for selecting, for each of the intermediate symbols, one code from the predetermined code group based on the corresponding code selection information and one codeword from the selected code based on the corresponding codeword selection information; and error correction codeword generating means for generating the error correction codeword based on the codewords selected for the respective intermediate symbols.
Further, the error correction coding method according to the present invention is a method for error correction coding an information symbol to generate an error correction codeword, comprising: an information symbol dividing step of dividing a plurality of original symbols forming the information symbol into a first symbol group and a second symbol group; an intermediate codeword generating step of encoding the first symbol group with a predetermined error correction code to generate an intermediate codeword composed of a plurality of intermediate symbols; a code selection information generating step of converting each intermediate symbol forming the intermediate codeword into code selection information designating one of a predetermined code group; a codeword selection information generating step of reconfiguring the second symbol group into the same number of symbols as the intermediate symbols, encoding each with a predetermined code, and generating a plurality of pieces of codeword selection information corresponding to the respective intermediate symbols; a codeword selecting step of selecting, for each of the intermediate symbols, one code from the predetermined code group based on the corresponding code selection information and one codeword from the selected code based on the corresponding codeword selection information; and an error correction codeword generating step of generating the error correction codeword based on the codewords selected for the respective intermediate symbols.
Further, the medium according to the present invention is a medium on which is recorded a program for causing a computer to function as an error correction coding apparatus that error correction codes an information symbol to generate an error correction codeword. The program causes the computer to operate as: information symbol dividing means for dividing a plurality of original symbols constituting the information symbol into a first symbol group and a second symbol group; intermediate codeword generating means for encoding the first symbol group with a predetermined error correction code to generate an intermediate codeword composed of a plurality of intermediate symbols; code selection information generating means for converting each intermediate symbol forming the intermediate codeword into code selection information designating one of a predetermined code group; codeword selection information generating means for reconfiguring the second symbol group into the same number of symbols as the intermediate symbols, encoding each with a predetermined code, and generating a plurality of pieces of codeword selection information corresponding to the respective intermediate symbols; codeword selecting means for selecting, for each of the intermediate symbols, one code from the predetermined code group based on the corresponding code selection information and one codeword from the selected code based on the corresponding codeword selection information; and error correction codeword generating means for generating the error correction codeword based on the codewords selected for the respective intermediate symbols.
According to the present invention, a plurality of original symbols forming an information symbol are divided into a first symbol group and a second symbol group, of which the first symbol group is encoded by a predetermined error correction code to generate an intermediate codeword. On the other hand, the second symbol group is reconfigured into the same number of symbols as the intermediate symbols, and each is encoded with a predetermined code; this yields codeword selection information corresponding to each intermediate symbol. Further, each of the plurality of intermediate symbols forming the intermediate codeword is converted into code selection information. Then, for each intermediate symbol, a code (a set of codewords) is selected by the code selection information, and a specific codeword within it is selected by the codeword selection information. Finally, an error correction codeword is generated from all the selected codewords.
(2) Next, the error correction code decoding apparatus according to the present invention comprises: received word provisional generating means for provisionally generating, based on a reception sequence, a received word composed of a plurality of received symbols; intermediate received word generating means for converting each of the received symbols into a decoding target symbol in a predetermined error correction code and generating an intermediate received word composed of the same number of intermediate symbols as the received symbols, each intermediate symbol being such a decoding target symbol; error correction code decoding means for decoding the intermediate received word with the predetermined error correction code; code selection information generating means for converting each intermediate symbol forming the decoded intermediate received word into code selection information designating one of a predetermined code group; received word regenerating means for selecting each of the received symbols, based on the reception sequence, from the codes designated by the code selection information, thereby regenerating the received word; and information symbol extracting means for extracting the information symbols included in the reception sequence based on the decoded intermediate received word and the regenerated received word.
Further, the error correction code decoding method according to the present invention comprises: a received word provisional generating step of provisionally generating, based on a reception sequence, a received word composed of a plurality of received symbols; an intermediate received word generating step of converting each of the received symbols into a decoding target symbol in a predetermined error correction code and generating an intermediate received word composed of the same number of intermediate symbols as the received symbols, each intermediate symbol being such a decoding target symbol; an error correction code decoding step of decoding the intermediate received word with the predetermined error correction code; a code selection information generating step of converting each intermediate symbol forming the decoded intermediate received word into code selection information designating one of a predetermined code group; a received word regenerating step of selecting each of the received symbols, based on the reception sequence, from the codes designated by the code selection information, thereby regenerating the received word; and an information symbol extracting step of extracting the information symbols included in the reception sequence based on the decoded intermediate received word and the regenerated received word.
Further, the medium according to the present invention has recorded thereon a program for causing a computer to function as: received word provisional generating means for provisionally generating, based on a reception sequence, a received word composed of a plurality of received symbols; intermediate received word generating means for converting each of the received symbols into a decoding target symbol in a predetermined error correction code and generating an intermediate received word composed of the same number of intermediate symbols as the received symbols, each intermediate symbol being such a decoding target symbol; error correction code decoding means for decoding the intermediate received word with the predetermined error correction code; code selection information generating means for converting each intermediate symbol forming the decoded intermediate received word into code selection information designating one of a predetermined code group; received word regenerating means for selecting each of the received symbols, based on the reception sequence, from the codes designated by the code selection information, thereby regenerating the received word; and information symbol extracting means for extracting the information symbols included in the reception sequence based on the decoded intermediate received word and the regenerated received word.
According to the present invention, the received word is provisionally generated based on the reception sequence, and each of the received symbols constituting it is converted into a decoding target symbol in the predetermined error correction code; these decoding target symbols are the intermediate symbols that constitute the intermediate received word. The decoding target symbol can, for example, be taken from the union of the symbols that are elements of the predetermined Galois field of the predetermined error correction code and an erasure symbol. Then, the intermediate received word is decoded by the predetermined error correction code. The intermediate symbols forming the intermediate received word that results from this decoding are each converted into code selection information, which designates one of the predetermined code group. Then, based on the reception sequence, a specific codeword is selected from the code designated by the code selection information corresponding to each received symbol, and the received word is regenerated from these codewords. Here, the phrase "based on the reception sequence" broadly includes cases based on information derived from the reception sequence; specifically, it also includes cases based on a likelihood sequence generated from the reception sequence and cases based on the received word itself.
According to the present invention, when the received word is regenerated, it suffices to select a specific codeword from the code designated by the code selection information. Therefore, by choosing the predetermined error correction code so that the code selection information is determined correctly, the decoding error rate characteristic can be improved.
Further, since the length of each received symbol, that is, the code length of each member of the predetermined code group, can be kept short, the number of codeword tables to be held can be reduced when, for example, a method equivalent to maximum likelihood decoding using a codeword table is applied during regeneration of the received word. It is therefore easy to apply such a maximum-likelihood-style method to the regeneration of the received word, in which case the decoding error rate characteristic can be further improved.
(3) Further, the error correction coding method according to the present invention forms a codeword of an error correction code by mapping each symbol V_i of a codeword V = (V_0, V_1, ..., V_{N-1}) of a predetermined error correction code C_s on a predetermined Galois field GF(q) to an element belonging to a subset {u} of the m-dimensional vectors u = (u_0, u_1, ..., u_{m-1}) on a predetermined Galois field GF(p). Here, V_i (i = 0, 1, ..., N-1) is an element of the Galois field GF(q), u_i (i = 0, 1, ..., m-1) is an element of the Galois field GF(p), m is a positive integer, and the order of the subset {u} is equal to the number of elements q of the Galois field GF(q). By doing so, it is possible to improve the decoding error rate characteristic, for example by appropriately selecting the subset {u} of m-dimensional vectors so as to increase the minimum distance between them.
In this case, the codeword V may be generated based on a part of the information symbols, and the mapping destination of each symbol V_i may be determined based on the rest of the information symbols. In this way, a part of the information symbols can be encoded in the mapping to the subset {u}.
Further, the subset {u} may be determined as f_0 ∪ f_1 ∪ ... ∪ f_{H-1}; one of the error correction codes f_0 to f_{H-1} is selected based on each symbol V_i, and one of the codewords belonging to the selected error correction code f_j (j = 0, 1, 2, ..., H-1) is selected as the mapping destination based on the rest of the information symbols. Here, f_0 and f_i = f_0 + w_i (i = 1, 2, ..., H-1) are error correction codes on the Galois field GF(p), and w_i is an m-dimensional vector on the Galois field GF(p) defined so that f_i ∩ f_j = {φ} (i ≠ j; i, j = 0, 1, 2, ..., H-1), where {φ} is the empty set.
By doing so, when one of the codewords belonging to an error correction code f_j (j = 0, 1, 2, ..., H-1) is selected based on the remaining part of the information symbols, one of the codewords belonging to the error correction code f_0 can first be selected and w_j added to it; in this way a codeword of the desired error correction code f_j can be selected easily.
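This select-then-shift procedure can be sketched with a deliberately tiny main member code and representative vectors (both illustrative assumptions, not the patent's actual codes):

```python
F0 = [[0, 0, 0], [1, 1, 1]]   # main member code f_0 (a length-3 repetition code)
W = [[0, 0, 0], [1, 1, 0]]    # representative vectors w_0, w_1

def select_codeword(j, info_bit):
    """Pick a codeword of member code f_j = f_0 + w_j: first choose a
    codeword of f_0 by the remaining information, then add w_j over GF(2)."""
    base = F0[info_bit]
    return [(b + w) % 2 for b, w in zip(base, W[j])]
```

Only the codewords of f_0 need to be stored; every f_j is reached by a single vector addition.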
(4) Further, the error correction code decoding method according to the present invention maps each r_i (i = 0, 1, ..., N-1) of the received word r = (r_0, r_1, ..., r_{N-1}) to the union of the predetermined Galois field GF(q) and the erasure symbol {ε} to generate an N-dimensional vector (R_0, R_1, ..., R_{N-1}), and decodes this N-dimensional vector with the predetermined error correction code C_s on the Galois field GF(q) to generate an estimated codeword V′ = (V_0′, V_1′, ..., V_{N-1}′). Here, each r_i (i = 0, 1, ..., N-1) is an m-dimensional vector on the Galois field GF(p). In this way, if a codeword is configured by mapping each symbol V_i of a codeword V = (V_0, V_1, ..., V_{N-1}) of the predetermined error correction code C_s on the Galois field GF(q) to an element belonging to a subset {u} of the m-dimensional vectors (u_0, u_1, ..., u_{m-1}) on the Galois field GF(p), the received word r corresponding to that codeword can be appropriately decoded.
In this case, the reception likelihoods (θ_0, θ_1, ..., θ_{N-1}) corresponding to the received word r may be acquired, and based on these reception likelihoods, maximum likelihood decoding may be performed for each r_i (i = 0, 1, ..., N-1), assuming that r_i belongs to the subset {u} of the m-dimensional vectors (u_0, u_1, ..., u_{m-1}) on the predetermined Galois field GF(p). Here, each θ_i (i = 0, 1, ..., N-1) corresponds to a symbol V_i′ of the estimated codeword V′ and is an m-dimensional vector indicating how close the value of that symbol is to each candidate value; for example, an m-dimensional vector with real-number elements can be used as θ_i. In this way, maximum likelihood decoding can be performed for each r_i individually, so it can be carried out without the need for a large table or register, and the decoding error rate characteristic can be improved.
Further, the subset {u} may be taken as f_0 ∪ f_1 ∪ ... ∪ f_{H-1}; one of the error correction codes f_0 to f_{H-1} is selected based on each V_i′ (i = 0, 1, 2, ..., N-1), and maximum likelihood decoding of the r_i corresponding to that V_i′ is performed with respect to the selected error correction code f_j (j = 0, 1, 2, ..., H-1). Here, f_0 and f_i = f_0 + w_i (i = 1, 2, ..., H-1) are error correction codes on the Galois field GF(p), and w_i is an m-dimensional vector on GF(p) defined so that f_i ∩ f_j = {φ} (i ≠ j; i, j = 0, 1, 2, ..., H-1). In this way, since each error correction code f_i (i = 1, 2, ..., H-1) contains relatively few codewords, maximum likelihood decoding can easily be realized with a small amount of information processing, without using a large table or register.
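The per-symbol maximum likelihood step can be sketched as follows. The tiny member codes and the sign-based likelihood convention (0 → +1, 1 → -1) are illustrative assumptions; the point is that only the few codewords of the selected f_j are searched:

```python
F0 = [[0, 0, 0], [1, 1, 1]]   # main member code f_0 (illustrative)
W = [[0, 0, 0], [1, 1, 0]]    # representative vectors w_0, w_1 (illustrative)

def member_codewords(j):
    """Enumerate the codewords of member code f_j = f_0 + w_j."""
    return [[(b + w) % 2 for b, w in zip(c, W[j])] for c in F0]

def ml_decode_member(theta_i, j):
    """ML-decode the m-dimensional likelihood theta_i within member code f_j
    only, so no table over the whole code K_I is ever needed."""
    def corr(c):
        return sum(t * (1 - 2 * b) for t, b in zip(theta_i, c))
    return max(member_codewords(j), key=corr)
```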
[0033]
BEST MODE FOR CARRYING OUT THE INVENTION Preferred embodiments of the present invention will now be described in detail with reference to the drawings.
Here, as one embodiment of the present invention, a code to which the maximum likelihood decoding method can easily be applied and which has excellent decoding error rate characteristics compared with conventional block error correction codes (referred to herein as "code K_I") will be described.
For simplicity, only the extension field GF(q) with q = 2^m is treated below, but the present invention applies similarly to q = p^m (p a prime).
A. Principle (1) Configuration The code K_I is composed of an upper code (supervising code) and a plurality of member codes. The upper code is generally a symbol error correction code; let its code length be N, its number of information symbols be K, and its symbol length be m. Generally, the codeword of the upper code is
[Equation 2] V = (V_0, V_1, ..., V_{N-1}) (2)
When the upper code is a systematic code, the first K symbols are information symbols and the remaining N-K symbols are check symbols.
Next, each member code f_j (j = 0 to H-1) is a code having code length n and k information symbols, and is generally expressed as
[Equation 3] f_j = C_m + w_j (0 ≤ j ≤ H-1) (3)
C_m is the main member code, and either a linear code or a nonlinear code can be used. When the main member code is a linear code, each f_j expressed by Equation (3) is a coset with respect to C_m; when the main member code is a nonlinear code, each f_j is a translate of C_m. The relationship between the main member code and the member codes is shown in the figure.
The representative vector w_j representing the class to which the member code f_j belongs and the symbols V_i of the upper code must be chosen so that they can be converted into each other by appropriate mappings. That is, between the representative vectors of the member codes and the symbols of the upper code, there must exist a one-to-one mapping φ from the Galois field GF(2^m) to the set W of representative vectors w_j of the member codes (Equation 4), and a mapping ψ from the n-dimensional vector space V^n onto the union GF(2^m) ∪ {ε} of the Galois field GF(2^m) and the erasure symbol {ε} (Equation 5), where V′ is calculated as in Equation 6.
Finally, the relationship between the upper code and the member codes will be described. A part of the information symbols is encoded by the main member code C_m to obtain a string of main member codewords. The remaining information symbols are encoded by the upper code, and for each symbol of the resulting upper codeword, the corresponding representative vector w_j is selected by the mapping φ. By adding these vectors in turn to the obtained main member codewords, a string of member codewords is obtained, and this is taken as the codeword of the code K_I. This state is shown in the figure.
From the above, the codeword of the code K_I is defined by
[Equation 7] K_I ≡ {x = (y_0, y_1, ..., y_{N-1}) : y_i ∈ f_{ji}} (7)
f_{ji} = C_m + w_{ji} (8)
w_{ji} = φ(V_i) (9)
V = (V_0, V_1, ..., V_{N-1}) (10)
where f_{ji} is a member code, w_{ji} is the representative vector of f_{ji}, and V is a codeword of the upper code. Therefore, the code length n′ of the code K_I and its number of information symbols k′ are given respectively by
[Equation 8] n′ = n · N (11)
k′ = k · N + m · K (12)
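Formulas (11) and (12) can be checked numerically. The component-code parameters below are an illustrative assumption, not values given by the patent:

```python
def code_K_I_params(n, k, m, N, K):
    """Code length n' and information symbol count k' of code K_I,
    per Equations (11) and (12)."""
    return n * N, k * N + m * K

# member code (n, k) = (7, 4) with symbol length m = 3,
# upper code (N, K) = (15, 11): an assumed example
n_prime, k_prime = code_K_I_params(7, 4, 3, 15, 11)
print(n_prime, k_prime)  # n' = 7*15 = 105, k' = 4*15 + 3*11 = 93
```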
(2) Example of Encoding Procedure An example of the encoding algorithm of the code K _{I} will be given below.
First, in the first step, a k'bit information symbol is divided into kN bits (second symbol group) and mK bits (first symbol group). In the second step, k
With respect to N bits, encoding is performed by the main member code C _{m for} every k bits to obtain N code words (code word selection information). In the third step, mK bits are subjected to upper coding (error correction coding) by using them as mbit K symbols. As a result, a check symbol is added and expanded into mbit N symbols (intermediate symbols; here referred to as “upper codeword symbols”).
In the fourth step, the N upper codeword symbols obtained in the third step are converted into vectors w_j (code selection information) by the mapping φ. In the fifth step, the N vectors w_j obtained in the fourth step are added to the N main member codewords obtained in the second step to produce the member codeword string. This is the codeword of the code K_I.
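The five steps above can be sketched in Python with deliberately tiny toy parameters (an illustrative assumption, not the patent's embodiments): the main member code C_m is taken as the [3,1] repetition code, and the upper code as a [3,2] single-parity code over GF(4), whose addition is bitwise XOR on 2-bit symbols.

```python
# Toy sketch of the five-step encoding of the code K_I.  All parameters
# are illustrative assumptions: C_m = [3,1] repetition code (n=3, k=1),
# upper code = [3,2] single-parity code over GF(4) (N=3, K=2, m=2).

N, K, k, n, m = 3, 2, 1, 3, 2

# phi: upper-code symbol -> representative vector (one coset leader per
# coset of C_m in F_2^3)
PHI = {0: (0, 0, 0), 1: (0, 0, 1), 2: (0, 1, 0), 3: (1, 0, 0)}

def encode(info):
    assert len(info) == k * N + m * K          # k' = 7 information bits
    # Step 1: split into kN bits (second group) and mK bits (first group)
    member_bits, upper_bits = info[:k * N], info[k * N:]
    # Step 2: encode every k bits with the main member code C_m
    main_words = [[b] * n for b in member_bits]
    # Step 3: upper coding -- append a GF(4) parity check symbol
    syms = [2 * upper_bits[0] + upper_bits[1],
            2 * upper_bits[2] + upper_bits[3]]
    syms.append(syms[0] ^ syms[1])             # GF(4) addition is XOR
    # Step 4: map each upper symbol to a representative vector via phi
    reps = [PHI[s] for s in syms]
    # Step 5: XOR each w_j onto the corresponding main member codeword
    return [mb ^ rb
            for mw, w in zip(main_words, reps)
            for mb, rb in zip(mw, w)]

cw = encode([1, 0, 1, 1, 0, 0, 1])
print(cw)   # -> [1, 0, 1, 0, 0, 1, 0, 1, 1]
```

Each 3-bit block of the result lies in the coset f_j = C_m + w_j selected by the corresponding upper-code symbol, which is exactly the structure of equations (7)–(10).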
Here, an error correction coding device for the code K _{I} will be described. FIG. 3 is a diagram showing the configuration of an error correction coding apparatus capable of coding with the code K _{I.} In the figure,
m is an information symbol sequence, m_m is the information symbol sequence for the member code, m_s is the information symbol sequence for the upper code, w_i is a selected representative vector, c is a codeword of the code K_I, 45 is an information symbol sequence division unit, 46 is a main member code encoder,
47 is an upper code encoder, 48 is a representative vector table, 49 is a representative vector selection unit, and 28 is an EXOR.
Next, the operation will be described. The information symbol string m is divided by the information symbol string division unit 45 into the information bits m_m for the member code and the information bits m_s for the upper code. The information symbol string m_m for the member code is encoded by the main member code encoder 46 into a codeword of the main member code. Independently of that, the information symbol string m_s for the upper code is encoded by the upper code encoder 47 into a codeword of the upper code. Using each symbol of the upper codeword, a representative vector w_i is selected from the representative vector table 48 by the representative vector selection unit 49. The selected representative vector w_i is combined by exclusive OR (EXOR 28) with the codeword of the main member code, and the codeword c of the code K_I is obtained.
(3) Example of Decoding Procedure An example of the decoding algorithm of the code K _{I} will be given below.
First, in the first step, the likelihood sequence θ obtained from the received sequence is hard-decided to generate a received word. In the second step, conversion by the mapping ψ is performed on each n-bit block (received symbol) of the hard decision bits to estimate one constituent symbol (decoding target symbol) of the upper code; this is done N times. In the third step, the word (intermediate received word) composed of the N estimated upper codeword symbols obtained in the second step is subjected to bounded distance decoding (Bounded Distance Decoding; BDD).
In the fourth step, the N upper codeword symbols decoded in the third step are converted into vectors w_j by the mapping φ. In the fifth step, maximum likelihood decoding (Maximum Likelihood Decoding; MLD) of the member code using the N vectors w_j obtained in the fourth step is performed on every n bits of the likelihood sequence θ to estimate the member codewords. In the sixth step, the information symbols are extracted from the codewords estimated in the third and fifth steps, and the decoding is completed.
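The decoding steps can be sketched on a deliberately tiny toy construction (an illustrative assumption, not the patent's embodiments): main member code C_m = [3,1] repetition code, upper code = [3,2] single-parity code over GF(4). Note that this toy upper code can only detect, not correct, symbol errors, whereas the embodiments below use Reed-Solomon codes with genuine erasure and error correction.

```python
# Toy sketch of the decoding steps of the code K_I (illustrative
# parameters only: C_m = [3,1] repetition code, upper code = [3,2]
# single-parity code over GF(4)).

PHI = {0: (0, 0, 0), 1: (0, 0, 1), 2: (0, 1, 0), 3: (1, 0, 0)}

def psi(block):
    # coset of a 3-bit block; the cosets of the [3,1] repetition code
    # partition F_2^3, so the erasure symbol never occurs in this toy
    for s, w in PHI.items():
        if tuple(b ^ wb for b, wb in zip(block, w)) in {(0, 0, 0), (1, 1, 1)}:
            return s
    return 'erasure'                       # unreachable here

def decode(theta):                         # theta: 9 likelihoods (+ => 0)
    # Step 1: hard decision
    r = [1 if t < 0 else 0 for t in theta]
    blocks = [tuple(r[3 * i:3 * i + 3]) for i in range(3)]
    # Step 2: estimate one upper-code symbol per received block via psi
    v = [psi(b) for b in blocks]
    # Step 3: "bounded distance decoding" of the upper code -- the toy
    # [3,2] parity code can merely verify its parity check
    assert v[0] ^ v[1] ^ v[2] == 0, 'upper code parity check failed'
    # Step 4: representative vectors from the decoded upper symbols
    reps = [PHI[s] for s in v]
    # Step 5: per-block MLD within the member code f_j = C_m + w_j,
    # by correlating the likelihoods with both codewords of the coset
    out = []
    for i, w in enumerate(reps):
        cands = [tuple(c ^ wb for c, wb in zip(cm, w))
                 for cm in [(0, 0, 0), (1, 1, 1)]]
        best = max(cands,
                   key=lambda c: sum(t * (1 - 2 * b)
                                     for t, b in zip(theta[3 * i:3 * i + 3], c)))
        out.extend(best)
    return out

theta = [-0.9, 0.3, -1.0,  1.1, 0.8, -0.7,  1.0, -0.2, -1.2]
print(decode(theta))   # -> [1, 0, 1, 0, 0, 1, 0, 1, 1]
```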
Here, an error correction code decoding device for the code K_I will be described. FIG. 4 is a diagram showing the configuration of an error correction code decoding apparatus capable of decoding a received sequence coded by the code K_I. In the figure, θ is a reception likelihood sequence, R is a hard decision symbol sequence, V′ is an estimated upper code symbol sequence, c_s is an estimated upper code codeword, c_m is an estimated member code codeword, w_i is a selected representative vector, 55 is a hard decision unit, 56 is an upper code symbol estimation unit, 57 is an upper code decoder, 58 is a member code maximum likelihood decoder, 48 is a representative vector table, and 49 is a representative vector selection unit.
Next, the operation will be described. The reception likelihood sequence θ is hard-decided bit by bit by the hard decision unit 55 to obtain the hard decision symbol sequence R. From R, the estimated upper code symbol sequence V′ is obtained by the upper code symbol estimation unit 56. The obtained sequence V′ is input to the upper code decoder 57, and the estimated upper code codeword c_s is obtained. Each symbol of the estimated codeword c_s is input to the representative vector selection unit 49, and the corresponding representative vector w_i is selected from the representative vector table 48. Meanwhile, the reception likelihood sequence θ is input, together with the selected representative vectors w_i, to the member code maximum likelihood decoder 58, and the estimated member code codeword c_m is obtained. That is, for each symbol forming the received word, the maximum likelihood decoder 58 selects an estimated word from the codewords belonging to the member code f_i designated by the representative vector w_i.
(4) Effects According to the code K _{I} which is an embodiment of the present invention, maximum likelihood decoding can be easily applied, whereby the decoding error characteristic can be improved.
In the third step of the decoding algorithm of the code K_I, the received symbol sequence of the upper code estimated from the hard decision symbol sequence is subjected to bounded distance decoding. In the fourth step, the representative vector w_i of the corresponding member code is determined from each symbol V_i of the estimated upper codeword as

[Equation 9]
w_i = φ(V_i)   (13)

If w_i is determined correctly, the class of member codes to which the received word r_i of the member code belongs is identified correctly. Therefore, excellent decoding error rate characteristics can be obtained even when the maximum likelihood decoding method is applied to r_i. Usually, the error correction capability of the upper code is set so that its block error rate is sufficiently small with respect to the error rate of the specified channel, so the misidentification rate of w_i can be kept small.
The code length n of the member code is usually about 7 to 20, and the transmission rate is set to about k/n ≈ 1/2, so even when maximum likelihood decoding using a codeword table is employed, the table to be held contains at most 2^10 = 1024 codewords and can easily be realized. Note that this size is determined independently of the code length of the code K_I.
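The memory estimate above is easy to check (assuming, as the text does, a rate of about 1/2, i.e. k ≈ n/2):

```python
# For member-code lengths n = 7..20 at rate k/n ~ 1/2, a full codeword
# table for MLD holds 2**k entries -- never more than 2**10 = 1024.
sizes = [2 ** (n // 2) for n in range(7, 21)]
print(max(sizes))   # -> 1024
```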
As described above, when decoding the code K_I, the maximum likelihood decoding method can easily be applied to the received words of the member codes, and the amount of memory required for maximum likelihood decoding can be kept extremely small compared with conventional methods. Therefore, the difficulty of applying maximum likelihood decoding to an (n, k) code of transmission rate k/n ≈ 1/2, which is a problem of the conventional methods, is resolved by the code K_I, and an improvement of the decoding error rate can be realized.
B. First Embodiment In the first embodiment, the case where the main member code C_m is an extended Hamming [2^m, 2^m − m] code and the upper code C_s is a Reed-Solomon [2^m − 1, K] code will be described. In this case, each member code is equivalent to a coset of the main member code C_m. Since the underlying Hamming code is a perfect code, the coset leader of each coset has weight 2 or less.
In order to design the code K_I, the mappings φ and ψ of equation (3), which translate between the member codes and the symbols of the upper code, must be set. First, the representative vector w_i indicating the class to which a member code belongs is defined as the 2^m-dimensional zero vector when i = 0, and, when 1 ≤ i ≤ 2^m − 1, as the weight-2 vector obtained by adding an overall parity bit to a (2^m − 1)-dimensional vector of weight 1. This state is shown in FIG. 5.
At this time, the mapping φ is defined as the function that regards an element of the Galois field GF(2^m) as a syndrome of the Hamming [2^m − 1, 2^m − m − 1] code, estimates the corresponding (2^m − 1)-dimensional error vector, and takes as its value the vector obtained by adding an overall parity bit to it. Since the Hamming code is a perfect code, the syndromes are in one-to-one correspondence with all (2^m − 1)-dimensional vectors of weight 1 or less. Therefore, the mapping φ clearly gives a one-to-one mapping from the elements of the Galois field GF(2^m) to the representative vectors w_i of the member codes f_i.
[0057] The mapping ψ is given by the function that, when a 2^m-dimensional vector is contained in one of the member codes, takes as its value the syndrome of the Hamming [2^m − 1, 2^m − m − 1] code for the (2^m − 1)-dimensional vector obtained by removing the overall parity part, and, when the 2^m-dimensional vector is contained in none of the member codes, takes the erasure symbol ε as its value. Because the syndromes of a linear code differ between different cosets, and the erasure symbol ε is used as the value when the vector belongs to no member code, ψ is clearly a mapping from the 2^m-dimensional vector space onto GF(2^m) ∪ {ε}. As described above, the constituent parameters of the code K_I are determined.
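As a concrete illustration of the mapping φ just defined, the m = 3 case (the Hamming [7,4,3] code with an overall parity bit, i.e. the extended Hamming [8,4,4] main member code) can be sketched as follows; the choice m = 3 and the parity-check-matrix column ordering are assumptions made for compactness:

```python
# Sketch of phi for m = 3: read an element of GF(2^3) as a syndrome of
# the Hamming [7,4,3] code, estimate the weight-<=1 error vector, and
# append an overall parity bit, yielding the representative vector w_i.

import itertools

# columns of the Hamming [7,4,3] parity-check matrix: binary 1..7
H_COLS = [tuple((j >> b) & 1 for b in (2, 1, 0)) for j in range(1, 8)]

def phi(s):                 # s: syndrome as a 3-bit tuple over GF(2)
    if s == (0, 0, 0):
        e = [0] * 7         # zero syndrome -> zero error vector
    else:
        e = [1 if col == s else 0 for col in H_COLS]
    return tuple(e) + (sum(e) % 2,)     # append overall parity bit

# one-to-one: the 8 syndromes give 8 distinct representatives,
# one of weight 0 and seven of weight 2, as stated above
reps = {phi(s) for s in itertools.product((0, 1), repeat=3)}
print(len(reps), sorted(sum(r) for r in reps))
# -> 8 [0, 2, 2, 2, 2, 2, 2, 2]
```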
FIG. 6 shows the configuration of the encoding device for the code K_I based on these parameters, and FIG. 7 shows the configuration of the decoding device.
In FIG. 6, 60 is a serial input, 61 is the bit sequence after serial-parallel conversion, 62 is the information bits to the extended Hamming encoder, 63 is the information bits to the Reed-Solomon encoder, 64 is the codeword bits of the extended Hamming code, 65 is the codeword bits of the Reed-Solomon code, 66 is the bits of the representative vector, 67 is the codeword bits of the code K_I, 68 is the serial output of the codeword, 69 is a serial-parallel conversion unit, 70 is an information symbol sequence division unit, 71 is an extended Hamming code encoder, 72 is a Reed-Solomon code encoder, 73 is an extended Hamming code syndrome decoder, 74 is a bit combination unit, 75 is a parallel-serial conversion unit, and 28 is an EXOR.
In the encoding device shown in the figure, the serial input 60 is fed to the serial-parallel conversion unit 69 and converted into a parallel signal. The parallel information bits are input to the information symbol sequence division unit 70 and divided into the information bits 62 for the extended Hamming encoder and the information bits 63 for the Reed-Solomon encoder, which are input to the extended Hamming code encoder 71 and the Reed-Solomon code encoder 72, respectively. The extended Hamming code encoder 71 encodes its input into the codeword bits 64 of the extended Hamming code. Likewise, the Reed-Solomon code encoder 72 encodes its input into the codeword bits 65 of the Reed-Solomon code. The codeword bits 65 are input to the extended Hamming code syndrome decoder 73, which realizes the mapping of the symbols of the Galois field GF(2^m) to the representative vectors of the member codes. The bits 66 of the representative vector determined by the syndrome decoder 73 are combined bit by bit, by exclusive OR in the EXOR 28, with the codeword bits 64 of the extended Hamming code and input to the coded bit combination unit 74, which assembles the member codewords into the codeword bits 67 of the code K_I. The codeword bits 67 are converted by the parallel-serial conversion unit 75 into the serial output 68, which is output as a codeword of the code K_I.
Next, the decoding device shown in FIG. 7 will be described. In the figure, 60a is a serial (bit) input, 61a is the bit sequence after parallel conversion, 76 is a likelihood sequence, 77 is the estimated Reed-Solomon code received symbols, 78 is the estimated Reed-Solomon codeword bits, 79 is the estimated extended Hamming code codeword bits, 80 is the estimated codeword bits of the code K_I, 68a is the serial output of the codeword, 81 is a reception sequence division unit, 82 is an extended Hamming code maximum likelihood decoder, 83 is a Hamming code syndrome calculation unit, 84 is a Reed-Solomon code bounded distance decoder, 73 is an extended Hamming code syndrome decoder, 85 is an extended Hamming code codeword table, 74 is a bit combination unit, 75 is a parallel-serial conversion unit, and 28 is an EXOR.
Next, the operation of this decoding device will be described. The serial input 60a (the received sequence) is fed to the serial-parallel conversion unit 69 and converted into parallel form. The parallel received sequence is input to the extended Hamming code maximum likelihood decoder 82 and to the Hamming code syndrome calculation unit 83.
The Hamming code syndrome calculation unit 83 makes a hard decision on the reception likelihood sequence 76 and then, if the result is an even-weight vector, calculates and outputs the syndrome of the Hamming code; if it is an odd-weight vector, it outputs the erasure symbol. This operation corresponds to the mapping ψ from the 2^m-dimensional vector space to GF(2^m) ∪ {ε}. The estimated Reed-Solomon code received symbols 77 output from the syndrome calculation unit 83 are input to the Reed-Solomon code bounded distance decoder 84 and subjected to bounded distance decoding with erasure correction, whereby the Reed-Solomon codeword bits 78 are estimated. The estimated Reed-Solomon codeword bits 78 are input to the extended Hamming code syndrome decoder 73, which determines the bits 66 of the representative vector. The determined representative vector is added to the codeword table 85. The extended Hamming code maximum likelihood decoder 82 estimates the extended Hamming code codeword bits 79 using this vector and the reception likelihood sequence 76. The estimated Reed-Solomon codeword bits 78 and the estimated extended Hamming code codeword bits 79 are input to the decoded bit combination unit 74 to form the estimated codeword bits 80 of the code K_I. The codeword bits 80 are converted into the serial output 68a by the parallel-serial conversion unit 75, which is output as the estimated codeword.
C. Second Embodiment The second embodiment is an example in which the code K_I is constructed with the Reed-Solomon [7,3,5] code over the Galois field GF(2^3) as the upper code C_s and the extended Hamming [8,4,4] code as the main member code C_m. In this embodiment, the decoding error rate characteristics of the code K_I, measured by numerical experiments, are also shown.
Using the main member code C_m and the representative vectors w_i, the member codes f_i are expressed as

f_i = C_m + w_i (0 ≤ i ≤ 2^3 − 1 = 7)   (14)

Here each representative vector w_i is the vector obtained by adding an overall parity bit to a coset leader of the Hamming [7,4,3] code: the one leader of weight 0 and the seven leaders of weight 1. This state is shown in FIG. 8.
Further, the mapping φ from the constituent symbols of the upper code to the eight-dimensional representative vectors is given by the function that treats a symbol of the Galois field GF(2^3) as a syndrome of the Hamming [7,4,3] code, estimates the corresponding error vector, and adds an overall parity bit to the estimated vector.
Further, the mapping ψ from an 8-dimensional vector to a constituent symbol of the upper code is given by the function that takes the erasure symbol as its value when the given 8-dimensional vector has odd weight, and, when the given vector has even weight, takes as its value the syndrome of the Hamming [7,4,3] code for the vector obtained by deleting the overall parity bit.
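The mapping ψ of this embodiment can be sketched directly (a minimal illustration; the parity-check-matrix column ordering is an assumption):

```python
# Sketch of psi for the second embodiment: an odd-weight 8-dimensional
# vector maps to the erasure symbol; an even-weight vector maps to the
# Hamming [7,4,3] syndrome of its first 7 bits (overall parity deleted).

import itertools

# columns of the Hamming [7,4,3] parity-check matrix: binary 1..7
H_COLS = [tuple((j >> b) & 1 for b in (2, 1, 0)) for j in range(1, 8)]

def psi(v):                          # v: 8-bit tuple over GF(2)
    if sum(v) % 2 == 1:
        return 'erasure'             # odd weight: in no member code
    s = (0, 0, 0)
    for bit, col in zip(v[:7], H_COLS):      # syndrome of 7-bit part
        if bit:
            s = tuple(a ^ b for a, b in zip(s, col))
    return s

# the 256 vectors map onto the 8 syndromes of GF(2^3) plus the erasure
# symbol, i.e. onto GF(2^3) U {eps}
vals = {psi(v) for v in itertools.product((0, 1), repeat=8)}
print(len(vals))   # -> 9
```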
The decoded block error rate characteristics of the code K_I configured as described above were obtained by numerical experiments; the result is shown by the solid line in FIG. 9. The channel is assumed to be an additive white Gaussian noise (AWGN) channel, and a decoding block error is counted whenever the decoding of the upper code (the RS [7,3,5] code) or the MLD of a member code (a coset of the extended Hamming [8,4,4] code) is erroneous even once within a codeword.
The constructed code K_I has code length n′ = 56, k′ = 37 information symbols, and g = 19 check symbols. For comparison with the characteristics of this code K_I, the decoding block error probability when the shortened BCH [56, 38, 7] code is subjected to bounded distance decoding is also shown by a broken line in the figure. As the figure shows, at a decoded block error probability of 1 × 10^−2, the code K_I has a coding gain of about 0.7 dB over the shortened BCH code.
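The quoted parameters follow from equations (11)–(12) with n = 8, k = 4 (the extended Hamming [8,4,4] member code) and N = 7, K = 3, m = 3 (the RS [7,3,5] code over GF(2^3)):

```python
# Check of the second embodiment's parameters via n' = n*N and
# k' = k*N + m*K (equations (11)-(12)).
n, k, N, K, m = 8, 4, 7, 3, 3
n_prime, k_prime = n * N, k * N + m * K
print(n_prime, k_prime, n_prime - k_prime)   # -> 56 37 19
```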
D. Third Embodiment The third embodiment is an example in which the main member code is the extended Golay code C_m = [24, 12, 8]. In this case, each member code is equivalent to a coset of C_m. First, the representative vector w_i indicating the class to which a member code belongs is defined as: the 24-dimensional zero vector when i = 0; when 1 ≤ i ≤ 23, the weight-2 vector obtained by adding an overall parity bit 1 to a 23-dimensional vector of weight 1; when 24 ≤ i ≤ 276, the weight-2 vector obtained by adding an overall parity bit 0 to a 23-dimensional vector of weight 2; and when 277 ≤ i ≤ 2047, the weight-4 vector obtained by adding an overall parity bit 1 to a 23-dimensional vector of weight 3. This state is shown in FIG. 10.
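The index ranges listed above can be checked by simple counting of the stated weights:

```python
# Counting the representative vectors listed for the extended Golay
# main member code: 1 zero vector, C(23,1) = 23 from weight-1 23-dim
# vectors, C(23,2) = 253 from weight-2, C(23,3) = 1771 from weight-3 --
# matching the index ranges i = 0, 1..23, 24..276, 277..2047.
from math import comb

counts = [1, comb(23, 1), comb(23, 2), comb(23, 3)]
print(counts, sum(counts))   # -> [1, 23, 253, 1771] 2048
```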
By using the same mappings φ and ψ as in the case where the main member code is the extended Hamming code, the mapping between the member codes and the symbols of an upper code on the Galois field GF(2^12) can be realized, and the code K_I can be constructed by using a symbol error correction code over GF(2^12), such as a Reed-Solomon code, as the upper code.
FIG. 1 is a diagram showing a relationship between a member code and a representative vector.
FIG. 2 is a diagram illustrating a coding procedure of a code K _{I.}
FIG. 3 is a diagram showing a configuration of an error correction coding apparatus for a code K _{I.}
FIG. 4 is a diagram showing a configuration of an error correction code decoding apparatus for a code K _{I.}
FIG. 5 is a diagram showing a representative vector of each member code when the main member code is an expanded Hamming code.
FIG. 6 is a diagram showing a configuration of an error correction coding apparatus according to the first embodiment.
FIG. 7 is a diagram showing a configuration of an error correction code decoding apparatus according to the first embodiment.
FIG. 8 is a diagram showing a representative vector w _{i} of each member code when the expanded Hamming [8,4,4] code is used as a main member code.
FIG. 9 is a diagram showing the decoding block error rate characteristics when the upper code is the Reed-Solomon [7,3,5] code and the main member code is the extended Hamming [8,4,4] code.
FIG. 10 is a diagram showing a representative vector of each member code when the main member code is an expanded Golay code.
FIG. 11 is a diagram showing an example of a conventional encoder for a single code.
FIG. 12 is a diagram showing an example of an encoder for a conventional combination code.
FIG. 13 is a diagram showing an example of a conventional encoder for concatenated codes.
FIG. 14 is a diagram showing an example of a decoder using a conventional bounded distance decoding method.
FIG. 15 is a diagram showing an example of a decoder using a conventional maximum likelihood decoding method.
28 EXOR, 45 information symbol string division unit, 46 main member code encoder, 47 upper code encoder, 48 representative vector table, 49 representative vector selection unit, 55 hard decision unit, 56 upper code symbol estimation unit, 57 upper code decoder, 58 member code maximum likelihood decoder, 60, 60a serial input, 68, 68a serial output, 69 serial-parallel conversion unit, 70 information symbol sequence division unit, 71 extended Hamming code encoder, 72 Reed-Solomon code encoder, 73 extended Hamming code syndrome decoder, 74 bit combination unit, 75 parallel-serial conversion unit, 76 likelihood sequence, 81 reception sequence division unit, 82 extended Hamming code maximum likelihood decoder, 83 extended Hamming code syndrome calculation unit, 84 Reed-Solomon code decoder, 85 codeword table.
Front page continuation (51) Int.Cl. ^{7} Identification code FI H03M 13/39 H03M 13/39 H04L 1/00 H04L 1/00 A 1/24 1/24 (56) Reference Masao Kasahara, Toru Haneda, IT99 −41: A few methods for encoding / decoding error correction codes, Technical Report of IEICE [Information Theory], Japan, July 23, 1999, IEICE Technical Report Vol. 99, No. 235, p. 4954 Toru Haneda, Masao Kasahara, IT9946: Performance of KI using mapping, IEICE Technical Report [Information Theory], Japan, September 16, 1999, IEICE Technical Report Vol. ． 99, No. 295, p. 1318 Masao Kasahara, IT9947: Generalized duplicate cyclic code (code KII), IEICE Technical Report [Information Theory], Japan, September 16, 1999, IEICE Technical Report Vol. ． 99, No. 295, p. 1924 (58) Fields surveyed (Int.Cl. ^{7} , DB name) H03M 13/00 G06F 11/10 H04L 1/00
Claims (12)
An error correction code decoding apparatus including: received word regenerating means for regenerating the received word; and information symbol extracting means for extracting the information symbols contained in the received sequence based on the decoded intermediate received word and the regenerated received word.
An error correction code decoding method comprising: a received word regeneration step of regenerating the received word; and an information symbol extraction step of extracting the information symbols contained in the received sequence based on the decoded intermediate received word and the regenerated received word.
A medium on which is recorded a program for operating a computer as: received word regenerating means for regenerating the received word; and information symbol extracting means for extracting the information symbols contained in the received sequence based on the decoded intermediate received word and the regenerated received word.
By mapping the element belonging thereto, an error correction coding method of configuring the codewords of the error correction code, wherein V_i (i = 0, 1, ..., N−1) is an element of the Galois field GF(q), u_i (i = 0, 1, ..., m−1) is an element of the Galois field GF(p), m is a positive integer, and the order of the subset {u} is equal to the number q of elements of the Galois field GF(q).
1) is an error correction code on the Galois field GF(p), and w_i is an m-dimensional vector on the Galois field GF(p) defined so that f_i ∩ f_j = φ (i ≠ j; i, j = 0, 1, 2, ..., H−1). An error correction coding method characterized in that one of the error correction codes f_0 to f_{H−1} is selected based on V_i, and one of the codewords belonging to the selected error correction code f_j (j = 0, 1, 2, ..., H−1) is selected as the mapping destination based on the remaining part of the information symbols.
Decoding by the predetermined error correction code C_s on GF(q), an error correction code decoding method for generating an estimated codeword V′ = (V_0′, V_1′, ..., V_{N−1}′), wherein r_i (i = 0, 1, ..., N−1) is an m-dimensional vector on the Galois field GF(p), the reception likelihoods (θ_0, θ_1, ..., θ_{N−1}) are acquired, and maximum likelihood decoding is performed for each r_i (i = 0, 1, 2, ..., N−1) regarding it as belonging to the subset {u} of m-dimensional vectors on the Galois field GF(p) given by (u_0, u_1, ..., u_{m−1}).
1) is an error correction code on the Galois field GF(p), and w_i is an m-dimensional vector on the Galois field GF(p) defined so that f_i ∩ f_j = φ (i ≠ j; i, j = 0, 1, 2, ..., H−1), wherein one of the error correction codes f_0 to f_{H−1} is selected based on V_i′ (i = 0, 1, 2, ..., N−1). An error correction code decoding method characterized in that maximum likelihood decoding is performed for the r_i corresponding to V_i, regarding it as one of the codewords belonging to the selected error correction code f_j (j = 0, 1, 2, ..., N−1).
Priority Applications (1)
Application Number  Priority Date  Filing Date  Title 

JP20719599A JP3451221B2 (en)  19990722  19990722  Error correction coding apparatus, method and medium, and error correction code decoding apparatus, method and medium 
Publications (2)
Publication Number  Publication Date 

JP2001036417A JP2001036417A (en)  20010209 
JP3451221B2 true JP3451221B2 (en)  20030929 
Family
ID=16535827
Family Applications (1)
Application Number  Title  Priority Date  Filing Date 

JP20719599A Expired  Fee Related JP3451221B2 (en)  19990722  19990722  Error correction coding apparatus, method and medium, and error correction code decoding apparatus, method and medium 
Country Status (1)
Country  Link 

JP (1)  JP3451221B2 (en) 
Families Citing this family (31)
Publication number  Priority date  Publication date  Assignee  Title 

US6320520B1 (en) *  19980923  20011120  Digital Fountain  Information additive group code generator and decoder for communications systems 
US6307487B1 (en)  19980923  20011023  Digital Fountain, Inc.  Information additive code generator and decoder for communication systems 
US7068729B2 (en)  20011221  20060627  Digital Fountain, Inc.  Multistage code generator and decoder for communication systems 
US9240810B2 (en)  20020611  20160119  Digital Fountain, Inc.  Systems and processes for decoding chain reaction codes through inactivation 
US9419749B2 (en)  20090819  20160816  Qualcomm Incorporated  Methods and apparatus employing FEC codes with permanent inactivation of symbols for encoding and decoding processes 
US9288010B2 (en)  20090819  20160315  Qualcomm Incorporated  Universal file delivery methods for providing unequal error protection and bundled file delivery services 
JP3973026B2 (en) *  20020830  20070905  富士通株式会社  Decoding device, decoding method, and program for causing processor to perform the method 
KR101143282B1 (en)  20021005  20120508  디지털 파운튼, 인크.  Systematic encoding and decoding of chain reaction codes 
US7139960B2 (en)  20031006  20061121  Digital Fountain, Inc.  Errorcorrecting multistage code generator and decoder for communication systems having single transmitters or multiple transmitters 
JP4971144B2 (en)  20040507  20120711  デジタル ファウンテン， インコーポレイテッド  File download and streaming system 
US9432433B2 (en)  20060609  20160830  Qualcomm Incorporated  Enhanced blockrequest streaming system using signaling or block creation 
US9380096B2 (en)  20060609  20160628  Qualcomm Incorporated  Enhanced blockrequest streaming system for handling lowlatency streaming 
US9178535B2 (en)  20060609  20151103  Digital Fountain, Inc.  Dynamic stream interleaving and substream based delivery 
US9386064B2 (en)  20060609  20160705  Qualcomm Incorporated  Enhanced blockrequest streaming using URL templates and construction rules 
US9209934B2 (en)  20060609  20151208  Qualcomm Incorporated  Enhanced blockrequest streaming using cooperative parallel HTTP and forward error correction 
US9136983B2 (en)  20060213  20150915  Digital Fountain, Inc.  Streaming and buffering using variable FEC overhead and protection periods 
US9270414B2 (en)  20060221  20160223  Digital Fountain, Inc.  Multiplefield based code generator and decoder for communications systems 
JP4662367B2 (en) *  20060418  20110330  共同印刷株式会社  Information symbol encoding method and apparatus, information symbol decoding method and decoding apparatus 
WO2007134196A2 (en)  20060510  20071122  Digital Fountain, Inc.  Code generator and decoder using hybrid codes 
RU2010114256A (en)  20070912  20111020  Диджитал Фаунтин, Инк. (Us)  Formation and transmission of original identification information to ensure reliable data exchange 
EP2178215A1 (en) *  20081016  20100421  Thomson Licensing  Method for error correction and error detection of modified array codes 
US9281847B2 (en)  20090227  20160308  Qualcomm Incorporated  Mobile reception of digital video broadcasting—terrestrial services 
US9917874B2 (en)  20090922  20180313  Qualcomm Incorporated  Enhanced blockrequest streaming using block partitioning or request controls for improved clientside handling 
US20110280311A1 (en)  20100513  20111117  Qualcomm Incorporated  Onestream coding for asymmetric stereo video 
US9596447B2 (en)  20100721  20170314  Qualcomm Incorporated  Providing frame packing type information for video coding 
US9319448B2 (en)  20100810  20160419  Qualcomm Incorporated  Trick modes for network streaming of coded multimedia data 
US9270299B2 (en)  20110211  20160223  Qualcomm Incorporated  Encoding and decoding using elastic codes with flexible source block mapping 
US8958375B2 (en)  20110211  20150217  Qualcomm Incorporated  Framing for an improved radio link protocol including FEC 
US9253233B2 (en)  20110831  20160202  Qualcomm Incorporated  Switch signaling methods providing improved switching between representations for adaptive HTTP streaming 
US9843844B2 (en)  20111005  20171212  Qualcomm Incorporated  Network streaming of media data 
US9294226B2 (en)  20120326  20160322  Qualcomm Incorporated  Universal object delivery and templatebased file delivery 

1999
 19990722 JP JP20719599A patent/JP3451221B2/en not_active Expired  Fee Related
NonPatent Citations (3)
Title 

Masao Kasahara, IT99-47: On generalized duplicate cyclic codes (code KII), IEICE Technical Report [Information Theory], Japan, September 16, 1999, IEICE Technical Report Vol. 99, No. 295, p. 19-24
Masao Kasahara, Toru Haneda, IT99-41: A few methods for encoding/decoding of error correction codes, IEICE Technical Report [Information Theory], Japan, July 23, 1999, IEICE Technical Report Vol. 99, No. 235, p. 49-54
Toru Haneda, Masao Kasahara, IT99-46: Performance of KI using mapping, IEICE Technical Report [Information Theory], Japan, September 16, 1999, IEICE Technical Report Vol. 99, No. 295, p. 13-18
Also Published As
Publication number  Publication date 

JP2001036417A (en)  20010209 
Similar Documents
Publication  Publication Date  Title 

US10686473B2 (en)  Encoding method and apparatus using CRC code and polar code  
RU2571587C2 (en)  Method and device for encoding and decoding data in convoluted polar code  
JP5524287B2 (en)  Inplace transform with application to encoding and decoding of various code classes  
AU2017326022B2 (en)  Method and apparatus for encoding data using a polar code  
RU2595542C2 (en)  Device and method for transmitting and receiving data in communication/broadcasting system  
Schmidt et al.  Collaborative decoding of interleaved Reed–Solomon codes and concatenated code designs  
EP1980041B1 (en)  Multiplefield based code generator and decoder for communications systems  
EP0728390B1 (en)  Method and apparatus for decoder optimization  
US7293222B2 (en)  Systems and processes for fast encoding of hamming codes  
JP3544033B2 (en)  Punctured convolutional encoding method and apparatus  
US6769091B2 (en)  Encoding method and apparatus using squished trellis codes  
US7260766B2 (en)  Iterative decoding process  
US6477680B2 (en)  Areaefficient convolutional decoder  
US6543023B2 (en)  Paritycheck coding for efficient processing of decoder error events in data storage, communication and other systems  
JP3328093B2 (en)  Error correction device  
CN1210872C (en)  Reduced search symbol estimation algorithm  
JP4773356B2 (en)  Error correcting multistage code generator and decoder for a communication system having a single transmitter or multiple transmitters  
US6694478B1 (en)  Low delay channel codes for correcting bursts of lost packets  
CN100355201C (en)  Reduced soft output information packet selection  
CN1770639B (en)  Concatenated iterative and algebraic coding  
Honary et al.  Trellis decoding of block codes: A practical approach  
KR101297060B1 (en)  Multidimensional block encoder with subblock interleaver and deinterleaver  
JP3917563B2 (en)  Method and system for decoding low density parity check (LDPC) codes  
US7956772B2 (en)  Methods and apparatus employing FEC codes with permanent inactivation of symbols for encoding and decoding processes  
CN101553990B (en)  Determination of interleaver sizes for turbo codes 
Legal Events
Date  Code  Title  Description 

TRDD  Decision of grant or rejection written  
R150  Certificate of patent or registration of utility model 
Ref document number: 3451221 Country of ref document: JP Free format text: JAPANESE INTERMEDIATE CODE: R150 

FPAY  Renewal fee payment (event date is renewal date of database) 
Free format text: PAYMENT UNTIL: 20080711 Year of fee payment: 5 

FPAY  Renewal fee payment (event date is renewal date of database) 
Free format text: PAYMENT UNTIL: 20090711 Year of fee payment: 6 

FPAY  Renewal fee payment (event date is renewal date of database) 
Free format text: PAYMENT UNTIL: 20100711 Year of fee payment: 7 

FPAY  Renewal fee payment (event date is renewal date of database) 
Free format text: PAYMENT UNTIL: 20110711 Year of fee payment: 8 

FPAY  Renewal fee payment (event date is renewal date of database) 
Free format text: PAYMENT UNTIL: 20120711 Year of fee payment: 9 

FPAY  Renewal fee payment (event date is renewal date of database) 
Free format text: PAYMENT UNTIL: 20130711 Year of fee payment: 10 

LAPS  Cancellation because of no payment of annual fees 