JP3451221B2 - Error correction coding apparatus, method and medium, and error correction code decoding apparatus, method and medium - Google Patents

Error correction coding apparatus, method and medium, and error correction code decoding apparatus, method and medium

Info

Publication number
JP3451221B2
JP3451221B2 (application JP20719599A)
Authority
JP
Japan
Prior art keywords
code
error correction
symbol
intermediate
symbols
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
JP20719599A
Other languages
Japanese (ja)
Other versions
JP2001036417A (en)
Inventor
正雄 笠原
Original Assignee
日本無線株式会社
正雄 笠原
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 日本無線株式会社, 正雄 笠原 filed Critical 日本無線株式会社
Priority to JP20719599A priority Critical patent/JP3451221B2/en
Publication of JP2001036417A publication Critical patent/JP2001036417A/en
Application granted granted Critical
Publication of JP3451221B2 publication Critical patent/JP3451221B2/en
Anticipated expiration legal-status Critical

Description

Detailed Description of the Invention

[0001]

BACKGROUND OF THE INVENTION 1. Field of the Invention: The present invention relates to an error correction coding apparatus, method, and medium, and to an error correction code decoding apparatus, method, and medium, and more particularly to a novel error correction code construction with excellent decoding error rate characteristics.

[0002]

2. Description of the Related Art: Generally, a block error correction code is represented by a subspace C of the n-dimensional vector space whose elements lie in the Galois field GF(q). The operation of mapping each element (information vector) m of the k-dimensional vector space over GF(q) one-to-one to an element (codeword) c of the block error correction code C is called encoding by the error correction code C, and such a code is called an (n, k) code. A device that performs encoding by the error correction code C is referred to as an encoder (encoding device) for C. When the codeword c is transmitted and an n-dimensional error vector e occurs, an n-dimensional vector (received vector)

[Equation 1]
    r = c + e    (1)

is received. The operation of estimating the error vector e and extracting the codeword c from the received vector r is called decoding of the error correction code C, and a device that performs it is called a decoder (decoding device) for C. The minimum Hamming distance between codewords of the (n, k) block error correction code C is called the minimum distance d; the maximum weight of an error vector e for which the codeword c can still be extracted from the received vector r of Equation (1) is determined by d.
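
As a minimal illustration of Equation (1), the following sketch adds a binary error vector to a codeword; binary (GF(2)) symbols are assumed and the vectors shown are arbitrary examples.

```python
# Minimal sketch of the channel model r = c + e of Eq. (1),
# assuming binary symbols (GF(2)); the vectors are illustrative only.

def add_gf2(c, e):
    """Received vector r = c + e, with addition in GF(2) (XOR per symbol)."""
    return [ci ^ ei for ci, ei in zip(c, e)]

codeword = [1, 0, 1, 1, 0, 0, 1]   # a codeword c of length n = 7
error    = [0, 0, 1, 0, 0, 0, 0]   # error vector e of Hamming weight 1
received = add_gf2(codeword, error)
print(received)  # the decoder must estimate e and recover c from r
```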

Conventionally, various block error correction codes have been devised. For cyclic codes such as Hamming codes, BCH codes, and Reed-Solomon codes, which are typical block error correction codes, encoding is completed by applying the encoder 100 once to the information vector m, as shown in the corresponding figure, producing the codeword c of the code C.

On the other hand, for a code constructed by combining codes into a new code, for example a concatenated code, a product code, or a superposition code, the encoders 102-1 to 102-J shown in the corresponding figure are applied in sequence to perform encoding by the codes C_1 to C_J, yielding the codeword c of the code C as a whole.

FIG. 13 shows an example of another encoder that realizes a conventional encoding method. This example relates to a concatenated code whose outer code is a Reed-Solomon (15,11) code and whose inner code is a Hamming (7,4) code. In the figure, m is the information symbol, c is the codeword, (m_0, ..., m_10) is the divided information vector, (c_0, ..., c_14) is a Reed-Solomon codeword, 106 is an information vector division unit, 108 is an encoder for the Reed-Solomon (15,11) code, and 110 is an encoder for the Hamming (7,4) code.

The operation of this encoder is as follows. First, the information vector m is divided by the information vector division unit 106 into blocks (m_0, ..., m_10), each 4 bits long. These 4-bit blocks are input to the Reed-Solomon (15,11) code encoder 108 and encoded into a Reed-Solomon (15,11) codeword (c_0, ..., c_14). Each symbol c_j (j = 0 to 14) of this Reed-Solomon codeword is then input to the Hamming (7,4) code encoder 110 and encoded, producing the codeword c of the concatenated code.
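
The inner-code stage of this concatenated encoder can be sketched as follows. The Hamming (7,4) generator used here is an assumed example (one of several equivalent systematic forms), and the Reed-Solomon (15,11) outer encoder is omitted; a placeholder list of 4-bit symbols stands in for the outer codeword (c_0, ..., c_14).

```python
# Sketch of the inner-code stage of the concatenated encoder of FIG. 13.
# The systematic Hamming (7,4) form below is an assumption; the
# Reed-Solomon (15,11) outer encoder is replaced by a placeholder.

def hamming74_encode(d):
    """Encode 4 data bits into a systematic Hamming (7,4) codeword."""
    d0, d1, d2, d3 = d
    return [d0, d1, d2, d3,
            d0 ^ d1 ^ d3,   # parity bits of this particular
            d0 ^ d2 ^ d3,   # generator matrix (one of several
            d1 ^ d2 ^ d3]   # equivalent Hamming (7,4) forms)

# Placeholder for the 15 outer-code symbols (c_0, ..., c_14), 4 bits each
outer_codeword = [[1, 0, 1, 1]] * 15
inner = [hamming74_encode(sym) for sym in outer_codeword]
c = [bit for block in inner for bit in block]   # concatenated codeword
print(len(c))   # 15 inner codewords x 7 bits = 105 bits
```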

On the other hand, conventional block error correction code decoding methods are roughly classified into the limit distance (bounded distance) decoding method, which performs decoding using a sequence R of hard-decision symbols (symbols with fixed values), and the maximum likelihood decoding method, which performs decoding using the reception likelihood sequence θ given for each symbol.

The former, the limit distance decoding method, decodes every received vector r within Hamming distance t of a codeword c into that codeword c, where t is the number of correctable errors determined by the minimum distance d of the block error correction code. The limit distance decoding method can be executed by algebraic calculation, so the circuit scale of its implementation can be kept small. The Euclidean decoding method and the Berlekamp-Massey decoding method are well-known examples of limit distance decoding.

The latter, the maximum likelihood decoding method, estimates the codeword c that maximizes the conditional probability P(r|c) for the received vector r. Since maximum likelihood decoding generally computes P(r|c) for all codewords, its circuit scale becomes large; however, it offers superior decoding error rate characteristics compared with the limit distance decoding method. Two well-known realizations of maximum likelihood decoding are the method using a codeword table and Wolf's method using a trellis.

FIG. 14 shows an example of a conventional decoder that implements the limit distance decoding method. In the figure, R is the hard-decision symbol sequence, s is the syndrome, e is the estimated error vector, c′ is the estimated codeword, 112 is a syndrome calculation unit, 114 is a Euclidean decoder, and 116 is an EXOR (exclusive OR). In this decoder, the syndrome calculation unit 112 calculates the syndrome s from the received hard-decision symbol sequence R. Using this syndrome s, the Euclidean decoder 114 estimates the error vector e by the Euclidean decoding method, a typical limit distance decoding method. The EXOR 116 then takes the exclusive OR of the estimated error vector e and the hard-decision symbol sequence R to obtain the estimated codeword c′.
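
The syndrome-then-correct flow of this decoder can be sketched with a single-error-correcting Hamming (7,4) code standing in for the Euclidean decoder of the figure; the parity-check matrix H below is an assumed example.

```python
# Sketch of the syndrome decoding flow of FIG. 14, with a Hamming (7,4)
# code standing in for the Euclidean decoder; H is an assumed example.

def hamming74_decode(R):
    """Correct any single bit error in a 7-bit word via its syndrome."""
    H = [[1, 1, 0, 1, 1, 0, 0],   # assumed parity-check matrix of a
         [1, 0, 1, 1, 0, 1, 0],   # systematic Hamming (7,4) code
         [0, 1, 1, 1, 0, 0, 1]]
    s = tuple(sum(h * b for h, b in zip(row, R)) % 2 for row in H)
    e = [0] * 7                   # estimated error vector
    if s != (0, 0, 0):
        cols = [tuple(H[i][j] for i in range(3)) for j in range(7)]
        e[cols.index(s)] = 1      # syndrome matches one column of H
    return [ri ^ ei for ri, ei in zip(R, e)]  # R XOR e = estimated codeword

# One flipped bit (position 2) in the codeword [1,0,1,1,0,1,0] is corrected:
print(hamming74_decode([1, 0, 0, 1, 0, 1, 0]))  # -> [1, 0, 1, 1, 0, 1, 0]
```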

Further, FIG. 15 shows an example of a conventional decoder that realizes the maximum likelihood decoding method using a codeword table. In the figure, θ is the reception likelihood sequence, U is the decision variable, 120-1 to 120-M are correlators, 122-1 to 122-M are codeword tables, 124 is a maximum value determination unit, and 126 is a codeword selector. In this decoder, the correlators 120-1 to 120-M calculate the correlation values between the reception likelihood sequence θ and the codewords c_0 to c_{M−1} stored in the codeword tables 122-1 to 122-M. The maximum value determination unit 124 then finds the maximum of these correlation values, which becomes the decision variable U. The codeword selector 126 selects, according to the codeword index giving the decision variable U, one of the codewords c_0 to c_{M−1} stored in the codeword tables 122-1 to 122-M, and outputs it as the estimated codeword c′.
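
The table-based maximum likelihood decoder can be sketched as below, assuming BPSK-style soft values (bit 0 as +1, bit 1 as -1) and a small illustrative codeword table; the table shown is not a code from this document.

```python
# Sketch of table-based maximum likelihood decoding (FIG. 15), assuming
# BPSK-style soft values; the codeword table is an illustrative toy code.

CODEWORD_TABLE = [[0, 0, 0, 0], [0, 1, 1, 1],
                  [1, 0, 1, 1], [1, 1, 0, 0]]   # illustrative table

def ml_decode(theta):
    """Return the table codeword with maximum correlation to theta."""
    def correlation(c):
        # map bit 0 -> +1, bit 1 -> -1 and take the inner product
        return sum(t * (1 - 2 * b) for t, b in zip(theta, c))
    return max(CODEWORD_TABLE, key=correlation)

theta = [0.9, -1.1, 0.2, -0.8]     # noisy soft values for one received word
print(ml_decode(theta))            # -> [0, 1, 1, 1]
```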

[0012]

It is known that the maximum likelihood decoding method currently has the best decoding error rate characteristic among decoding methods when communication is performed using an (n, k) block error correction code.

Among the known realizations of maximum likelihood decoding, the method using the codeword tables 122-1 to 122-M, as in the decoder of FIG. 15, requires a total codeword table size proportional to 2^k. Although not specifically illustrated, even Wolf's trellis-based method requires a total number of state-holding registers inside the decoder proportional to 2^(n−k).

Therefore, when the code length n is large and the number of information symbols k is close to n/2, that is, when the transmission rate k/n ≈ 1/2, any known realization of maximum likelihood decoding requires a table or register whose size is roughly proportional to 2^(n/2). Even for a short code with code length n = 100, this value is 2^50, so implementing maximum likelihood decoding for such an (n, k) block error correction code is very difficult.
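
A quick arithmetic check of these estimates, with the illustrative values n = 100 and k = 50:

```python
# Check of the resource estimates above for an illustrative rate-1/2 code.

n, k = 100, 50                   # short code with transmission rate 1/2
table_size = 2 ** k              # codeword-table method: ~ 2^k entries
register_count = 2 ** (n - k)    # Wolf trellis method: ~ 2^(n-k) registers
print(table_size)                # -> 1125899906842624, i.e. 2^50 (~10^15)
```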

Therefore, conventionally, to realize a decoding device for (n, k) block error correction codes with k/n ≈ 1/2, the limit distance decoding method had to be adopted, which was one cause of degraded decoding error rate characteristics.

The present invention has been made in view of the above problems, and its object is to provide an error correction coding apparatus, method, and medium, and an error correction code decoding apparatus, method, and medium, capable of improving the decoding error rate characteristic without requiring large tables or registers.

[0017]

(1) In order to solve the above problems, an error correction coding apparatus according to the present invention generates an error correction codeword by error correction coding of information symbols, and comprises: information symbol dividing means for dividing the plurality of original symbols constituting the information symbols into a first symbol group and a second symbol group; intermediate codeword generating means for encoding the first symbol group with a predetermined error correction code to generate an intermediate codeword composed of a plurality of intermediate symbols; code selection information generating means for converting each intermediate symbol forming the intermediate codeword into code selection information designating one code of a predetermined code group; codeword selection information generating means for reconfiguring the second symbol group into the same number of symbol groups as the intermediate symbols and encoding each with a predetermined code, thereby generating a plurality of pieces of codeword selection information corresponding to the respective intermediate symbols; codeword selecting means for selecting, for each intermediate symbol, one code from the predetermined code group based on the corresponding code selection information and one codeword from the selected code based on the corresponding codeword selection information; and error correction codeword generating means for generating the error correction codeword based on the codewords selected for the respective intermediate symbols.

Further, the error correction coding method according to the present invention generates an error correction codeword by error correction coding of information symbols, and comprises: an information symbol dividing step of dividing the plurality of original symbols forming the information symbols into a first symbol group and a second symbol group; an intermediate codeword generating step of encoding the first symbol group with a predetermined error correction code to generate an intermediate codeword composed of a plurality of intermediate symbols; a code selection information generating step of converting each intermediate symbol forming the intermediate codeword into code selection information designating one code of a predetermined code group; a codeword selection information generating step of reconfiguring the second symbol group into the same number of symbol groups as the intermediate symbols and encoding each with a predetermined code to generate a plurality of pieces of codeword selection information corresponding to the respective intermediate symbols; a codeword selecting step of selecting, for each intermediate symbol, one code from the predetermined code group based on the corresponding code selection information and one codeword from the selected code based on the corresponding codeword selection information; and an error correction codeword generating step of generating the error correction codeword based on the codewords selected for the respective intermediate symbols.

Further, the medium according to the present invention records a program for causing a computer to function as an error correction coding apparatus that generates an error correction codeword by error correction coding of information symbols, the program causing the computer to operate as: information symbol dividing means for dividing the plurality of original symbols constituting the information symbols into a first symbol group and a second symbol group; intermediate codeword generating means for encoding the first symbol group with a predetermined error correction code to generate an intermediate codeword composed of a plurality of intermediate symbols; code selection information generating means for converting each intermediate symbol forming the intermediate codeword into code selection information designating one code of a predetermined code group; codeword selection information generating means for reconfiguring the second symbol group into the same number of symbol groups as the intermediate symbols and encoding each with a predetermined code to generate a plurality of pieces of codeword selection information corresponding to the respective intermediate symbols; codeword selecting means for selecting, for each intermediate symbol, one code from the predetermined code group based on the corresponding code selection information and one codeword from the selected code based on the corresponding codeword selection information; and error correction codeword generating means for generating the error correction codeword based on the codewords selected for the respective intermediate symbols.

According to the present invention, the plurality of original symbols forming the information symbols are divided into a first symbol group and a second symbol group, and the first symbol group is encoded by a predetermined error correction code to generate an intermediate codeword. Meanwhile, the second symbol group is reconfigured into the same number of symbol groups as the intermediate symbols and each is encoded with a predetermined code, yielding codeword selection information corresponding to each intermediate symbol. Each of the intermediate symbols forming the intermediate codeword is further converted into code selection information. Then, for each intermediate symbol, a code (a set of codewords) is selected by the code selection information, and a specific codeword within it is selected by the codeword selection information. Finally, an error correction codeword is generated from all the selected codewords.

(2) Next, the error correction code decoding apparatus according to the present invention comprises: received word provisional generating means for provisionally generating, based on a reception sequence, a received word composed of a plurality of received symbols; intermediate received word generating means for converting each received symbol into a symbol subject to decoding in a predetermined error correction code, thereby generating an intermediate received word composed of the same number of intermediate symbols as the received symbols, each intermediate symbol being such a decoding target symbol; error correction code decoding means for decoding the intermediate received word with the predetermined error correction code; code selection information generating means for converting each intermediate symbol forming the decoded intermediate received word into code selection information designating one code of a predetermined code group; received word regenerating means for selecting each received symbol, based on the reception sequence, from the code designated by the corresponding code selection information, thereby regenerating the received word; and information symbol extracting means for extracting the information symbols contained in the reception sequence based on the decoded intermediate received word and the regenerated received word.

Further, the error correction code decoding method according to the present invention comprises: a received word provisional generating step of provisionally generating, based on a reception sequence, a received word composed of a plurality of received symbols; an intermediate received word generating step of converting each received symbol into a symbol subject to decoding in a predetermined error correction code, thereby generating an intermediate received word composed of the same number of intermediate symbols as the received symbols, each intermediate symbol being such a decoding target symbol; an error correction code decoding step of decoding the intermediate received word with the predetermined error correction code; a code selection information generating step of converting each intermediate symbol forming the decoded intermediate received word into code selection information designating one code of a predetermined code group; a received word regenerating step of selecting each received symbol, based on the reception sequence, from the code designated by the corresponding code selection information, thereby regenerating the received word; and an information symbol extracting step of extracting the information symbols contained in the reception sequence based on the decoded intermediate received word and the regenerated received word.

Also, the medium according to the present invention records a program for causing a computer to function as: received word provisional generating means for provisionally generating, based on a reception sequence, a received word composed of a plurality of received symbols; intermediate received word generating means for converting each received symbol into a symbol subject to decoding in a predetermined error correction code, thereby generating an intermediate received word composed of the same number of intermediate symbols as the received symbols, each intermediate symbol being such a decoding target symbol; error correction code decoding means for decoding the intermediate received word with the predetermined error correction code; code selection information generating means for converting each intermediate symbol constituting the decoded intermediate received word into code selection information designating one code of a predetermined code group; received word regenerating means for selecting each received symbol, based on the reception sequence, from the code designated by the corresponding code selection information, thereby regenerating the received word; and information symbol extracting means for extracting the information symbols contained in the reception sequence based on the decoded intermediate received word and the regenerated received word.

According to the present invention, a received word is provisionally generated based on the reception sequence, and each received symbol constituting it is converted into a decoding target symbol in the predetermined error correction code; these decoding target symbols are the intermediate symbols constituting the intermediate received word. The decoding target symbol can, for example, be drawn from the union of the symbols that are elements of the predetermined Galois field of the predetermined error correction code and the erasure symbol {ε}. The intermediate received word is then decoded by the predetermined error correction code, and each intermediate symbol of the decoding result is converted into code selection information, that is, information designating one code of the predetermined code group. Then, based on the reception sequence, a specific codeword is selected from the code designated by the code selection information corresponding to each received symbol, and the received word is regenerated from these codewords. Here, "based on the reception sequence" broadly includes cases based on information derived from the reception sequence; specifically, it also includes cases based on a likelihood sequence generated from the reception sequence and cases based on the received word.

According to the present invention, when the received word is regenerated it suffices to select a specific codeword from the code designated by the code selection information. Therefore, by choosing the predetermined error correction code so that the code selection information is determined correctly, the decoding error rate characteristic can be improved.

Further, since the length of each received symbol, that is, the code length of each code of the predetermined code group, can be made short, the number of codeword tables to be held can be reduced if, for example, a method like table-based maximum likelihood decoding is applied when regenerating the received word. It is therefore easy to apply a maximum-likelihood-style method to the regeneration of the received word, in which case the decoding error rate characteristic can be further improved.

(3) Further, the error correction coding method according to the present invention forms a codeword of an error correction code by mapping each symbol V_i of a codeword V = (V_0, V_1, ..., V_{N−1}) of a predetermined error correction code C_s over a predetermined Galois field GF(q) to an element of a subset {u} of the m-dimensional vectors u = (u_0, u_1, ..., u_{m−1}) over a predetermined Galois field GF(p). Here each V_i (i = 0, 1, ..., N−1) is an element of the Galois field GF(q), each u_i (i = 0, 1, ..., m−1) is an element of the Galois field GF(p), m is a positive integer, and the order of the subset {u} is equal to the number of elements q of the Galois field GF(q). In this way, the decoding error rate characteristic can be improved, for example by choosing the subset {u} of m-dimensional vectors appropriately so as to increase the minimum distance between its vectors.

In this case, the codeword V may be generated based on part of the information symbols, and the mapping destination of each symbol V_i may be determined based on the remainder of the information symbols. In this way, part of the information symbols is encoded in the course of the mapping to the subset {u}.

Further, the subset {u} may be given by f_0 ∪ f_1 ∪ ... ∪ f_{H−1}; one of the error correction codes f_0 to f_{H−1} is selected based on each symbol V_i, and one of the codewords belonging to the selected error correction code f_j (j = 0, 1, 2, ..., H−1) is selected as the mapping destination based on the remainder of the information symbols. Here f_0 and f_i = f_0 + w_i (i = 1, 2, ..., H−1) are error correction codes over the Galois field GF(p), and each w_i is an m-dimensional vector over GF(p) chosen so that f_i ∩ f_j = {φ} (i ≠ j; i, j = 0, 1, 2, ..., H−1), where {φ} denotes the empty set.
With this construction, when one of the codewords of the selected error correction code f_j is to be chosen based on the remaining part of the information symbols, it suffices, for example, to first select a codeword of f_0 and then add w_j to it, so a codeword of the desired error correction code f_j (j = 0, 1, 2, ..., H−1) can be selected easily.
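
The coset construction f_i = f_0 + w_i can be illustrated with a toy binary main code and assumed representative vectors; the disjointness condition f_i ∩ f_j = {φ} (i ≠ j) is checked explicitly. All concrete vectors here are illustrative, not codes from the embodiment.

```python
# Illustrative construction of member codes f_j = C_m + w_j as cosets of
# a toy binary [4,2] code C_m with assumed representative vectors w_j.

def add_vec(a, b):
    """Componentwise addition over GF(2)."""
    return tuple(x ^ y for x, y in zip(a, b))

C_m = {(0, 0, 0, 0), (1, 1, 1, 0), (1, 0, 0, 1), (0, 1, 1, 1)}  # toy code
w = [(0, 0, 0, 0), (1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0)]    # toy w_j

f = [{add_vec(c, wj) for c in C_m} for wj in w]   # member codes (cosets)
for i in range(len(f)):
    for j in range(i + 1, len(f)):
        assert not (f[i] & f[j])                  # f_i and f_j are disjoint
print(sorted(f[1]))   # the coset C_m + (1,0,0,0)
```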

(4) Further, the error correction code decoding method according to the present invention maps each r_i (i = 0, 1, ..., N−1) of a received word r = (r_0, r_1, ..., r_{N−1}) to the union of the predetermined Galois field GF(q) and the erasure symbol {ε} to generate an N-dimensional vector (R_0, R_1, ..., R_{N−1}), decodes that N-dimensional vector with the predetermined error correction code C_s over GF(q), and generates (V_0′, V_1′, ..., V_{N−1}′). Here each r_i (i = 0, 1, ..., N−1) is an m-dimensional vector over the Galois field GF(p). In this way, if a codeword is constructed by mapping each symbol V_i of a codeword V = (V_0, V_1, ..., V_{N−1}) of the predetermined error correction code C_s over GF(q) to an element of the subset {u} of m-dimensional vectors (u_0, u_1, ..., u_{m−1}) over GF(p), the received word r corresponding to that codeword can be decoded appropriately.

In this case, reception likelihoods (θ_0, θ_1, ..., θ_{N−1}) corresponding to the received word r may be acquired, and based on them, maximum likelihood decoding may be performed for each r_i (i = 0, 1, ..., N−1) under the assumption that each r_i belongs to the subset {u} of m-dimensional vectors (u_0, u_1, ..., u_{m−1}) over the predetermined Galois field GF(p). Here each θ_i (i = 0, 1, ..., N−1) corresponds to a symbol V_i′ of the estimated codeword V′ and is an m-dimensional vector indicating which values the received symbol values are close to; for example, an m-dimensional vector of real numbers can be used as θ_i. In this way, maximum likelihood decoding can be performed for each r_i individually, without the need for large tables or registers, and the decoding error rate characteristic can be improved.

Further, the subset {u} may be given by f_0 ∪ f_1 ∪ ... ∪ f_{H−1}; one of the error correction codes f_0 to f_{H−1} is selected based on each V_i′ (i = 0, 1, 2, ..., N−1), the r_i corresponding to that V_i′ is assumed to belong to the selected error correction code f_j (j = 0, 1, 2, ..., H−1), and maximum likelihood decoding is performed for r_i under that assumption. Here f_0 and f_i = f_0 + w_i (i = 1, 2, ..., H−1) are error correction codes over the Galois field GF(p), and each w_i is an m-dimensional vector over GF(p) chosen so that f_i ∩ f_j = {φ} (i ≠ j; i, j = 0, 1, 2, ..., H−1). Since each error correction code f_i (i = 1, 2, ..., H−1) contains relatively few codewords, maximum likelihood decoding can then be realized easily, with a small amount of processing and without large tables or registers.

[0033]

BEST MODE FOR CARRYING OUT THE INVENTION

Preferred embodiments of the present invention will now be described in detail with reference to the drawings.

Here, as one embodiment of the present invention, a code to which the maximum likelihood decoding method can easily be applied and which has excellent decoding error rate characteristics compared with conventional block error correction codes (referred to herein as "code K_I") will be described. For simplicity, only the extension field GF(q) with q = 2^m is treated below, but the present invention applies equally to q = p^m (p a prime).

A. Principle (1) Configuration of Code K_I: The code K_I is composed of an upper code (supervising code) and a plurality of member codes. The upper code is generally a symbol error correction code; let its code length be N, its number of information symbols be K, and its symbol length be m. In general, the codeword of the upper code is

[Equation 2]
    V = (V_0, V_1, ..., V_{N−1})    (2)

When the upper code is a systematic code, the first K symbols are information symbols and the latter N−K symbols are check symbols.

Next, each member code f_j (j = 0 to H−1) is a code with code length n and number of information symbols k, and is generally expressed as

[Equation 3]
    f_j = C_m + w_j    (0 ≤ j ≤ H−1)    (3)

Here C_m is called the main member code, and either a linear or a non-linear code can be used for it. When the main member code is linear, each f_j of Equation (3) corresponds to a coset of C_m; when the main member code is non-linear, each f_j corresponds to a translate of C_m. The relationship between the main member code and the member codes is shown in the corresponding figure.

The representative vector w_j representing the class to which the member code f_j belongs and the symbols V_i of the upper code must be chosen so that they can be converted into each other by appropriate mappings. That is, between the representative vectors of the member codes and the symbols of the upper code there must exist

[Equation 4]
    a one-to-one mapping φ from the Galois field GF(2^m) to the set W of representative vectors w_j,

and

[Equation 5]
    a mapping ψ from the n-dimensional vector space V^n to the union GF(2^m) ∪ {ε} of the Galois field GF(2^m) and the erasure symbol {ε},

where V′ is calculated using the mapping ψ (Equation (6)).

Finally, the relationship between the upper code and the member codes is as follows. Part of the information symbols is encoded by the main member code C_m to obtain a string of main member codewords. The remaining information symbols are encoded by the upper code, and for each symbol of the resulting upper codeword the corresponding representative vector w_j is selected via the mapping φ. Adding these vectors in turn to the main member codewords yields a string of member codewords, which is taken as the codeword of the code K_I. This situation is shown in the corresponding figure.

From the above, the codeword of the code K_I is defined by

[Equation 7]
    K_I ≡ {x = (y_0, y_1, ..., y_{N−1}) : y_i ∈ f_{ji}}    (7)
    f_{ji} = C_m + w_{ji}    (8)
    w_{ji} = φ(V_i)    (9)
    V = (V_0, V_1, ..., V_{N−1})    (10)

where f_{ji} is a member code, w_{ji} is the representative vector of f_{ji}, and V is the codeword of the upper code. Therefore, the code length n′ and the number of information symbols k′ of the code K_I are

[Equation 8]
    n′ = n · N    (11)
    k′ = k · N + m · K    (12)
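
As a worked check of Equations (11) and (12), take illustrative parameters matching the earlier concatenated-code example, a (7,4) member code and a length-15 upper code with 11 information symbols of 4 bits each:

```python
# Worked check of Eqs. (11)-(12); the parameter values are illustrative.

n, k = 7, 4           # member code: length n, information bits k
N, K, m = 15, 11, 4   # upper code: length N, info symbols K, symbol bits m

n_prime = n * N             # Eq. (11): overall code length
k_prime = k * N + m * K     # Eq. (12): overall number of information bits
print(n_prime, k_prime)     # -> 105 104
```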

(2) Example of Encoding Procedure. An example of the encoding algorithm for the code K_I is given below.

First, in the first step, a k'bit information symbol is divided into kN bits (second symbol group) and mK bits (first symbol group). In the second step, k
With respect to N bits, encoding is performed by the main member code C m for every k bits to obtain N code words (code word selection information). In the third step, mK bits are subjected to upper coding (error correction coding) by using them as m-bit K symbols. As a result, a check symbol is added and expanded into m-bit N symbols (intermediate symbols; here referred to as “upper codeword symbols”).

In the fourth step, each of the N upper codeword symbols obtained in the third step is converted into a vector w_j (code selection information) by the mapping φ. In the fifth step, the N vectors w_j obtained in the fourth step are added to the N main member codewords obtained in the second step to compute the member codeword string of length N. This is the codeword of the code K_I.
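The five steps above can be sketched in Python for the extended Hamming [8,4,4] member code of the later embodiments. The sketch is ours, not the patent's implementation: in particular, `toy_upper_encode` is a deliberately simplified stand-in for the Reed-Solomon upper encoder (it merely appends XOR check symbols), used only so that the overall step structure stays visible.

```python
from itertools import product

def syndrome(v7):
    """Syndrome of the Hamming [7,4,3] code whose parity-check columns are
    the binary representations of the positions 1..7."""
    s = 0
    for pos, bit in enumerate(v7, start=1):
        if bit:
            s ^= pos
    return s

# Step 2 machinery -- main member code C_m (extended Hamming [8,4,4]):
# info bits sit at positions 3, 5, 6, 7 (1-indexed) of the Hamming codeword,
# and an overall parity bit is appended.
HAMMING = {}
for v in product((0, 1), repeat=7):
    if syndrome(v) == 0:
        HAMMING[(v[2], v[4], v[5], v[6])] = v + (sum(v) % 2,)

def phi(s):
    """Step 4 -- representative vector for upper-code symbol s in GF(2^3)."""
    v = [0] * 7
    if s:
        v[s - 1] = 1          # the weight <= 1 error vector with syndrome s
    return tuple(v) + (sum(v) % 2,)

def toy_upper_encode(info_syms, N):
    """Step 3 -- stand-in (an assumption of this sketch) for the Reed-Solomon
    upper encoder: append N - K check symbols equal to the XOR of the info."""
    check = 0
    for s in info_syms:
        check ^= s
    return list(info_syms) + [check] * (N - len(info_syms))

def encode_KI(member_info, upper_info):
    """member_info: N blocks of 4 bits; upper_info: K symbols 0..7."""
    main_words = [HAMMING[blk] for blk in member_info]            # step 2
    upper_word = toy_upper_encode(upper_info, len(main_words))    # step 3
    reps = [phi(s) for s in upper_word]                           # step 4
    return [tuple(c ^ r for c, r in zip(cw, rv))                  # step 5
            for cw, rv in zip(main_words, reps)]

codeword = encode_KI([(0, 1, 0, 1)] * 7, [3, 5, 6])
assert len(codeword) == 7 and all(len(b) == 8 for b in codeword)
assert all(sum(b) % 2 == 0 for b in codeword)   # member words stay even-weight
```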

Here, an error correction coding device for the code K_I will be described. FIG. 3 is a diagram showing the configuration of an error correction coding apparatus capable of encoding with the code K_I. In the figure, m is an information symbol sequence, m_m is the information symbol sequence for the member code, m_s is the information symbol sequence for the upper code, w_i is the selected representative vector, c is a codeword of the code K_I, 45 is an information symbol sequence division unit, 46 is a main member code encoder, 47 is an upper code encoder, 48 is a representative vector table, 49 is a representative vector selection unit, and 28 is an EXOR.

Next, the operation will be described. The information symbol string m is divided by the information symbol string division unit 45 into the information bits m_m for the member code and the information bits m_s for the upper code. The information symbol string m_m for the member code is encoded by the main member code encoder 46 into a codeword of the main member code. Independently of that, the information symbol string m_s for the upper code is encoded by the upper code encoder 47 into a codeword of the upper code. For each symbol of the upper codeword, a representative vector w_i is selected from the representative vector table 48 by the representative vector selection unit 49. The selected representative vector w_i is exclusive-ORed by the EXOR 28 with the codeword of the main member code, yielding the codeword c of the code K_I.

(3) Example of Decoding Procedure An example of the decoding algorithm of the code K I will be given below.

First, in the first step, the likelihood sequence θ obtained from the received sequence is hard-decided to generate a received word. In the second step, conversion using the mapping ψ is performed for each n-bit block (received symbol) of hard-decision bits to estimate one constituent symbol (decoding target symbol) of the upper code; this is done N times. In the third step, the word (intermediate received word) composed of the N upper codeword estimation symbols obtained in the second step is subjected to bounded distance decoding (BDD).

In the fourth step, each of the N upper codeword symbols decoded in the third step is converted into a vector w_j by the mapping φ. In the fifth step, maximum likelihood decoding (MLD) of the member code using the N vectors w_j obtained in the fourth step is performed for every n bits of the likelihood sequence θ to estimate the member codewords. In the sixth step, information symbols are extracted from the codewords estimated in the third and fifth steps, and the decoding is completed.
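The decoding steps can likewise be sketched in Python (our illustration, using the extended Hamming [8,4,4] member code of the later embodiments; the bounded distance decoding of the upper code in step 3 is omitted, so the sketch shows only the hard decision, the mapping ψ, and the per-block MLD of step 5):

```python
from itertools import product

def syndrome(v7):
    s = 0
    for pos, bit in enumerate(v7, start=1):
        if bit:
            s ^= pos
    return s

# Main member code C_m: extended Hamming [8,4,4].
C_m = [v + (sum(v) % 2,) for v in product((0, 1), repeat=7) if syndrome(v) == 0]

def phi(s):
    v = [0] * 7
    if s:
        v[s - 1] = 1
    return tuple(v) + (sum(v) % 2,)

def psi(r8):
    """Step 2: map a hard-decision 8-bit block to GF(2^3) union {erasure};
    None stands for the erasure symbol epsilon."""
    if sum(r8) % 2 == 1:
        return None
    return syndrome(r8[:7])

def mld_member(theta8, w_rep):
    """Step 5: MLD within the member code C_m + w_rep, where theta8 holds
    the per-bit likelihood of the bit being 1."""
    def metric(word):
        return sum(t if b else 1.0 - t for b, t in zip(word, theta8))
    return max((tuple(c ^ w for c, w in zip(cw, w_rep)) for cw in C_m),
               key=metric)

# Send a codeword of member code f_3 = C_m + phi(3) and disturb one bit.
sent = tuple(c ^ w for c, w in zip(C_m[5], phi(3)))
theta = [0.9 if b else 0.1 for b in sent]
theta[2] = 1.0 - theta[2]                  # one flipped position
hard = tuple(1 if t > 0.5 else 0 for t in theta)

assert psi(hard) is None      # odd weight: the block becomes an erasure,
                              # to be restored by the upper-code BDD (step 3)
assert mld_member(theta, phi(3)) == sent   # MLD still recovers the word
```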

Here, an error correction code decoding device for the code K_I will be described. FIG. 4 is a diagram showing the configuration of an error correction code decoding apparatus capable of decoding a received sequence encoded by the code K_I. In the figure, θ is a reception likelihood sequence, R is a hard-decision symbol sequence, V′ is an estimated upper code symbol sequence, c_s is an estimated upper code codeword, c_m is an estimated member code codeword, w_i is the selected representative vector, 55 is a hard decision unit, 56 is an upper code symbol estimation unit, 57 is an upper code decoder, 58 is a member code maximum likelihood decoder, 48 is a representative vector table, and 49 is a representative vector selection unit.

Next, the operation will be described. The reception likelihood sequence θ is hard-decided bit by bit by the hard decision unit 55 to obtain a hard-decision symbol sequence R. An estimated upper code symbol sequence V′ is obtained from the hard-decision symbol sequence R by the upper code symbol estimation unit 56. The obtained estimated upper code symbol sequence V′ is input to the upper code decoder 57, and the estimated upper code codeword c_s is obtained. Each symbol of the estimated upper code codeword c_s is input to the representative vector selection unit 49, and the corresponding representative vector w_i is selected from the representative vector table 48. The reception likelihood sequence θ, together with the selected representative vectors w_i, is input to the member code maximum likelihood decoder 58, and the estimated member code codeword c_m is obtained. That is, the maximum likelihood decoder 58 selects, for each symbol forming the received word, an estimated symbol from the codewords belonging to the member code f_i designated by the representative vector w_i.

(4) Effects According to the code K I which is an embodiment of the present invention, maximum likelihood decoding can be easily applied, whereby the decoding error characteristic can be improved.

In the third step of the decoding algorithm of the code K_I, the received symbol sequence of the upper code estimated from the hard-decision symbol sequence is subjected to bounded distance decoding. From each symbol V_i of the codeword of the estimated upper code, the representative vector w_i of the corresponding member code is determined in the fourth step of the decoding algorithm as

[Equation 9]
w_i = φ(V_i)  (13)

If this w_i is correctly determined, the class of the member code to which the received word r_i of the member code belongs is correctly determined; therefore, even when the maximum likelihood decoding method is applied to r_i, excellent decoding error rate characteristics can be obtained. Usually, the error correction capability of the upper code is set so that the block error rate is sufficiently small for the error rate of the specified channel, so the rate of erroneous determination of w_i can be kept small.

The code length n of the member code is usually about n = 7 to 20 and the transmission rate is set to about k/n ≈ 1/2, so even when maximum likelihood decoding using a codeword table is used, the total number of table entries to be held is at most about 2^10 = 1024, which can easily be realized. Note that this is determined independently of the code length of the code K_I.

As described above, when decoding the code K_I, the maximum likelihood decoding method can easily be applied to the received words of the member code, and the amount of memory required for maximum likelihood decoding can be kept extremely small compared with conventional methods. Therefore, the difficulty of applying maximum likelihood decoding to an (n, k) code with transmission rate k/n ≈ 1/2 in the conventional methods is resolved by using the code K_I, and an improvement in the decoding error rate can be realized.

B. First Embodiment In the first embodiment, the case where the main member code C_m is the extended Hamming [2^m, 2^m − m] code and the upper code C_s is a Reed-Solomon [2^m − 1, K] code will be described. In this case, each member code is a coset of the main member code C_m. Since the Hamming code is a perfect code, the weight of the coset leader of each coset is 2 or less.

In order to design the code K_I, it is necessary to set the mappings φ and ψ that relate the symbols of the member codes and the upper code expressed by equation (3). First, the representative vector w_i indicating the class to which a member code belongs is defined as: the 2^m-dimensional zero vector when i = 0, and the weight-2 vector obtained by adding an overall parity bit to a 2^m − 1 dimensional vector of weight 1 when 1 ≤ i ≤ 2^m − 1. This state is shown in FIG. 5.

At this time, the mapping φ is given by the function: "regard an element of the Galois field GF(2^m) as a syndrome of the Hamming [2^m − 1, 2^m − m − 1] code, estimate the 2^m − 1 dimensional error vector, and take as the value the vector obtained by adding overall parity to it." Since the Hamming code is a perfect code, the syndromes are in one-to-one correspondence with all 2^m − 1 dimensional vectors of weight 1 or less. Therefore, it is clear that the mapping φ gives a one-to-one mapping from the elements of the Galois field GF(2^m) to the representative vectors w_i of the member codes f_i.

The mapping ψ is given by the function: "when a 2^m-dimensional vector is included in one of the member codes, take as the value the syndrome of the Hamming [2^m − 1, 2^m − m − 1] code for the 2^m − 1 dimensional vector excluding the overall parity part; when the 2^m-dimensional vector is not included in any of the member codes, take the erasure symbol ε as the value."

From the property that the syndromes of a linear code for different cosets differ from one another, and because the erasure symbol ε is taken as the value when the vector is not included in any of the member codes, it is clear that ψ is a mapping from the 2^m-dimensional vector space onto GF(2^m) ∪ {ε}. As described above, the constituent parameters of the code K_I are determined.
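As an illustration of these definitions (ours, instantiated for m = 3, i.e. the Hamming [7,4,3] / extended Hamming [8,4,4] pair of the later embodiments), the following sketch constructs φ and ψ and checks that φ is one-to-one and that ψ inverts it on the representative vectors:

```python
m = 3
n = 2 ** m   # 8

def syndrome(v7):
    """Syndrome of the Hamming [7,4,3] code whose parity-check columns are
    the binary representations of the positions 1..7."""
    s = 0
    for pos, bit in enumerate(v7, start=1):
        if bit:
            s ^= pos
    return s

def phi(s):
    """Regard s in GF(2^3) as a syndrome, estimate the weight <= 1 error
    vector with that syndrome, and append an overall parity bit."""
    v = [0] * (n - 1)
    if s != 0:
        v[s - 1] = 1     # a single error at position s has syndrome s
    return v + [sum(v) % 2]

def psi(v8):
    """Map an 8-dimensional vector to GF(2^3) union {erasure} (None)."""
    if sum(v8) % 2 == 1:          # odd weight: in no member code
        return None
    return syndrome(v8[:-1])      # even weight: syndrome of the first 7 bits

assert len({tuple(phi(s)) for s in range(n)}) == n   # phi is one-to-one
assert all(psi(phi(s)) == s for s in range(n))       # psi recovers the symbol
```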

FIG. 6 shows the configuration of the encoding device for the code K_I based on these parameters, and FIG. 7 shows the configuration of the decoding device.

In FIG. 6, 60 is a serial input, 61 is a bit after serial-parallel conversion, 62 is an information bit to the extended Hamming encoder, 63 is an information bit to the Reed-Solomon encoder, 64 is a codeword bit of the extended Hamming code, 65 is a codeword bit of the Reed-Solomon code, 66 is a bit of the representative vector, 67 is a codeword bit of the code K_I, 68 is a serial output of the codeword, 69 is a serial-parallel conversion unit, 70 is an information symbol sequence dividing unit, 71 is an extended Hamming code encoder, 72 is a Reed-Solomon code encoder, 73 is an extended Hamming code syndrome decoder, 74 is a bit combination unit, 75 is a parallel-serial conversion unit, and 28 is an EXOR.

In the encoding device shown in the figure, the serial input 60 is fed to the serial-parallel conversion unit 69 and converted into a parallel signal. The information bits converted into parallel form are input to the information symbol sequence dividing unit 70 and divided into the information bits 62 for the extended Hamming encoder and the information bits 63 for the Reed-Solomon code encoder, which are input to the extended Hamming code encoder 71 and the Reed-Solomon code encoder 72, respectively. In the extended Hamming code encoder 71, the input information is extended-Hamming coded into the codeword bits 64 of the extended Hamming code. The information input to the Reed-Solomon code encoder 72 is Reed-Solomon coded into the codeword bits 65 of the Reed-Solomon code. The codeword bits 65 of the Reed-Solomon code are input to the extended Hamming code syndrome decoder 73, which has the function of mapping the symbols of the Galois field GF(2^m) to the representative vectors of the member codes. The bits 66 of the representative vector determined by the extended Hamming code syndrome decoder 73 are exclusive-ORed bit by bit by the EXOR 28 with the codeword bits 64 of the extended Hamming code and input to the coded bit combination unit 74, which combines the codewords of the member codes into the codeword bits 67 of the code K_I. The codeword bits 67 of the code K_I are converted by the parallel-serial conversion unit 75 into the serial output 68, which is output as a codeword of the code K_I.

Next, the decoding device shown in FIG. 7 will be described. In the figure, 60a is a serial input, 61a is a bit after parallel conversion, 76 is a likelihood sequence, 77 is an estimated Reed-Solomon code received symbol, 78 is an estimated Reed-Solomon codeword bit, 79 is an estimated extended Hamming code codeword bit, 80 is an estimated codeword bit of the code K_I, 68a is a serial output of the codeword, 81 is a reception sequence division unit, 82 is an extended Hamming code maximum likelihood decoder, 83 is a Hamming code syndrome calculation unit, 84 is a Reed-Solomon code bounded distance decoder, 73 is an extended Hamming code syndrome decoder, 85 is an extended Hamming code codeword table, 74 is a bit combination unit, 75 is a parallel-serial conversion unit, and 28 is an EXOR.

Next, the operation of this decoding device will be described. The serial input 60a (received sequence) is input to the serial-parallel conversion unit 69 and converted into parallel form. The received sequence converted into parallel form is input to the extended Hamming code maximum likelihood decoder 82 and to the Hamming code syndrome calculation unit 83. The Hamming code syndrome calculation unit 83 makes a hard decision on the reception likelihood sequence 76 and then outputs the syndrome of the Hamming code if the result is a vector of even weight, or the erasure symbol if it is a vector of odd weight. This operation corresponds to the mapping ψ from the 2^m-dimensional vector space to the Galois field GF(2^m) ∪ {ε}. The estimated Reed-Solomon code received symbols 77 output from the Hamming code syndrome calculation unit 83 are input to the Reed-Solomon code bounded distance decoder 84 and subjected to bounded distance decoding with erasure correction, and the Reed-Solomon codeword bits 78 are estimated. The estimated Reed-Solomon codeword bits 78 are input to the extended Hamming code syndrome decoder 73, which determines the bits 66 of the representative vector. The determined bits 66 of the representative vector are added to the codeword table 85. The extended Hamming code maximum likelihood decoder 82 estimates the extended Hamming code codeword bits 79 using this vector and the reception likelihood sequence 76. The estimated Reed-Solomon codeword bits 78 and the estimated extended Hamming code codeword bits 79 are input to the decoded bit combination unit 74 and form the estimated codeword bits 80 of the code K_I. The codeword bits 80 are converted into the serial output 68a by the parallel-serial conversion unit 75, which is output as the estimated codeword.

C. Second Embodiment The second embodiment is an example in which the code K_I is configured using the Reed-Solomon [7,3,5] code over the Galois field GF(2^3) as the upper code C_s and the extended Hamming [8,4,4] code as the main member code C_m. In the present embodiment, the results of measuring the decoding error rate characteristics of the code K_I by numerical experiments are also shown.

The member code f_i is expressed using the main member code C_m and the representative vector w_i as

f_i = C_m + w_i (0 ≤ i ≤ 2^3 − 1 = 7)  (14)

Here, each representative vector w_i is obtained by adding an overall parity bit to the single coset leader of weight 0 or to one of the seven coset leaders of weight 1 of the cosets of the Hamming [7,4,3] code. This state is shown in FIG. 8.
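The coset structure of equation (14) can be verified directly (a sketch of ours): the eight member codes f_0, …, f_7 are pairwise disjoint and together exhaust all 128 even-weight 8-bit vectors.

```python
from itertools import product

def syndrome(v7):
    s = 0
    for pos, bit in enumerate(v7, start=1):
        if bit:
            s ^= pos
    return s

# C_m: extended Hamming [8,4,4] = Hamming [7,4,3] codewords plus overall parity.
C_m = [v + (sum(v) % 2,) for v in product((0, 1), repeat=7) if syndrome(v) == 0]
assert len(C_m) == 16

# Representative vectors w_0..w_7: the zero vector, and the seven weight-1
# coset leaders of the Hamming [7,4,3] code with an overall parity bit added.
w = [(0,) * 8]
for i in range(1, 8):
    e = [0] * 7
    e[i - 1] = 1
    w.append(tuple(e) + (1,))

# Member codes f_i = C_m + w_i of Eq. (14).
f = [{tuple(c ^ b for c, b in zip(cw, wi)) for cw in C_m} for wi in w]
union = set().union(*f)
assert len(union) == 8 * 16 == 128               # pairwise disjoint cosets
assert all(sum(x) % 2 == 0 for x in union)       # all of even weight
```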

Further, the mapping φ from the constituent symbols of the upper code to the eight-dimensional representative vectors is given by the function: "treat a symbol on the Galois field GF(2^3) as a syndrome of the Hamming [7,4,3] code, estimate the error vector, and add an overall parity bit to the estimated vector."

Further, the mapping ψ from an 8-dimensional vector to a constituent symbol of the upper code is given by the function: "when the given 8-dimensional vector is a vector of odd weight, take the erasure symbol as the value; when the given 8-dimensional vector is a vector of even weight, take as the value the syndrome of the Hamming [7,4,3] code for the vector obtained by deleting the overall parity bit from this vector."

The decoded block error rate characteristics when the code K_I configured as described above is decoded were obtained by numerical experiments. The result is shown by the solid line in FIG. 9. Here, the channel is assumed to be an additive white Gaussian noise (AWGN) channel, and a decoding block error is counted whenever the decoding of the upper code (RS [7,3,5] code) of the code K_I or the MLD of a member code (extended Hamming [8,4,4] code) codeword is erroneous even once within the codeword.

The constructed code K_I has code length n′ = 56, number of information symbols k′ = 37, and number of check symbols g = 19. For comparison with the characteristics of this code K_I, the decoding block error probability when the shortened BCH [56, 38, 7] code is subjected to bounded distance decoding is also shown by a broken line in the figure. As shown in the figure, at a decoded block error probability of 1 × 10^-2, the code K_I has a coding gain of about 0.7 dB over the shortened BCH code.

D. Third Embodiment The third embodiment is an example in which the main member code is the extended Golay code C_m [24, 12, 8]. In this case, each member code is a coset of C_m. First, the representative vector w_i indicating the class to which a member code belongs is defined as: when i = 0, the 24-dimensional zero vector; when 1 ≤ i ≤ 23, a vector of weight 2 obtained by adding an overall parity bit 1 to a 23-dimensional vector of weight 1; when 24 ≤ i ≤ 276, a vector of weight 2 obtained by adding an overall parity bit 0 to a 23-dimensional vector of weight 2; and when 277 ≤ i ≤ 2047, a vector of weight 4 obtained by adding an overall parity bit 1 to a 23-dimensional vector of weight 3. This state is shown in FIG. 10.
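A quick count (our consistency check, not part of the patent text): the classes listed above contain 1, 23, C(23,2) = 253, and C(23,3) = 1771 vectors, which together exhaust the 2048 indices 0 ≤ i ≤ 2047 — exactly the number of cosets of the perfect Golay [23,12,7] code.

```python
from math import comb

# Number of representative vectors in each class of the list above.
counts = {
    "i = 0 (zero vector)": 1,
    "1 <= i <= 23 (weight-1 leader, parity 1)": comb(23, 1),
    "24 <= i <= 276 (weight-2 leader, parity 0)": comb(23, 2),
    "277 <= i <= 2047 (weight-3 leader, parity 1)": comb(23, 3),
}
assert comb(23, 2) == 253 and comb(23, 3) == 1771
assert sum(counts.values()) == 2048   # indices 0..2047 are fully covered
```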

By using the same mappings φ and ψ as when the main member code is the extended Hamming code, the correspondence between the member codes and the symbols of the upper code on the Galois field GF(2^11) can be realized, and the code K_I can be constructed by using as the upper code a symbol error correction code over the Galois field GF(2^11), such as a Reed-Solomon code. (Note that there are 2048 = 2^11 representative vectors, matching the number of cosets of the Golay [23,12,7] code.)

[Brief description of drawings]

FIG. 1 is a diagram showing a relationship between a member code and a representative vector.

FIG. 2 is a diagram illustrating a coding procedure of a code K I.

FIG. 3 is a diagram showing a configuration of an error correction coding apparatus for a code K I.

FIG. 4 is a diagram showing a configuration of an error correction code decoding apparatus for a code K I.

FIG. 5 is a diagram showing a representative vector of each member code when the main member code is an expanded Hamming code.

FIG. 6 is a diagram showing a configuration of an error correction coding apparatus according to the first embodiment.

FIG. 7 is a diagram showing a configuration of an error correction code decoding apparatus according to the first embodiment.

FIG. 8 is a diagram showing a representative vector w i of each member code when the expanded Hamming [8,4,4] code is used as a main member code.

FIG. 9 is a diagram showing the decoding block error rate characteristics when the upper code is the Reed-Solomon [7,3,5] code and the main member code is the extended Hamming [8,4,4] code.

FIG. 10 is a diagram showing a representative vector of each member code when the main member code is an expanded Golay code.

FIG. 11 is a diagram showing an example of a conventional encoder for a single code.

FIG. 12 is a diagram showing an example of an encoder for a conventional combination code.

FIG. 13 is a diagram showing an example of a conventional encoder for concatenated codes.

FIG. 14 is a diagram showing an example of a decoder using a conventional limit distance decoding method.

FIG. 15 is a diagram showing an example of a decoder using a conventional maximum likelihood decoding method.

[Explanation of symbols]

28 EXOR, 45 information symbol string division unit, 46 main member code encoder, 47 upper code encoder, 48 representative vector table, 49 representative vector selection unit, 55 hard decision unit, 56 upper code symbol estimation unit, 57 upper code decoder, 58 member code maximum likelihood decoder, 60, 60a serial input, 68, 68a serial output, 69 serial-parallel conversion unit, 70 information symbol sequence division unit, 71 extended Hamming code encoder, 72 Reed-Solomon code encoder, 73 extended Hamming code syndrome decoder, 74 bit combination unit, 75 parallel-serial conversion unit, 76 likelihood sequence, 81 reception sequence division unit, 82 extended Hamming code maximum likelihood decoder, 83 extended Hamming code syndrome calculation unit, 84 Reed-Solomon code decoder, 85 codeword table.

Front page continuation (51) Int.Cl. 7 Identification code FI H03M 13/39 H03M 13/39 H04L 1/00 H04L 1/00 A 1/24 1/24 (56) Reference Masao Kasahara, Toru Haneda, IT99 −41: A few methods for encoding / decoding error correction codes, Technical Report of IEICE [Information Theory], Japan, July 23, 1999, IEICE Technical Report Vol. 99, No. 235, p. 49-54 Toru Haneda, Masao Kasahara, IT99-46: Performance of KI using mapping, IEICE Technical Report [Information Theory], Japan, September 16, 1999, IEICE Technical Report Vol. . 99, No. 295, p. 13-18 Masao Kasahara, IT99-47: Generalized duplicate cyclic code (code KII), IEICE Technical Report [Information Theory], Japan, September 16, 1999, IEICE Technical Report Vol. . 99, No. 295, p. 19-24 (58) Fields surveyed (Int.Cl. 7 , DB name) H03M 13/00 G06F 11/10 H04L 1/00

Claims (12)

(57) [Claims]
1. An error correction coding apparatus for error correction coding information symbols to generate an error correction codeword, comprising: an information symbol dividing unit that divides a plurality of original symbols constituting the information symbols into a first symbol group and a second symbol group; intermediate codeword generating means that encodes the first symbol group with a predetermined error correction code to generate an intermediate codeword composed of a plurality of intermediate symbols; code selection information generating means for converting each of the intermediate symbols constituting the intermediate codeword into code selection information designating one of a predetermined code group; codeword selection information generating means for reconfiguring the second symbol group into the same number of symbols as the intermediate symbols and encoding each with a predetermined code to generate a plurality of pieces of codeword selection information corresponding to the respective intermediate symbols; codeword selecting means that, for each of the intermediate symbols, selects one code from the predetermined code group based on the corresponding code selection information and selects one codeword from the selected code based on the corresponding codeword selection information; and error correction codeword generating means for generating the error correction codeword based on the codewords selected for the respective intermediate symbols.
2. An error correction code decoding apparatus comprising: received word provisional generation means for provisionally generating a received word composed of a plurality of received symbols based on a received sequence; intermediate received word generating means for converting each of the received symbols into a decoding target symbol of a predetermined error correction code to generate an intermediate received word composed of the same number of intermediate symbols as the received symbols, each intermediate symbol being a decoding target symbol; error correction code decoding means for decoding the intermediate received word with the predetermined error correction code; code selection information generating means for converting each intermediate symbol constituting the decoded intermediate received word into code selection information designating one of a predetermined code group; received word regenerating means for selecting each of the received symbols, based on the received sequence, from the code designated by the code selection information, thereby regenerating the received word; and information symbol extracting means for extracting the information symbols contained in the received sequence based on the decoded intermediate received word and the regenerated received word.
3. An error correction coding method for error correction coding information symbols to generate an error correction codeword, comprising: an information symbol dividing step of dividing a plurality of original symbols constituting the information symbols into a first symbol group and a second symbol group; an intermediate codeword generating step of encoding the first symbol group with a predetermined error correction code to generate an intermediate codeword composed of a plurality of intermediate symbols; a code selection information generating step of converting each of the intermediate symbols constituting the intermediate codeword into code selection information designating one of a predetermined code group; a codeword selection information generating step of reconfiguring the second symbol group into the same number of symbols as the intermediate symbols and encoding each with a predetermined code to generate a plurality of pieces of codeword selection information corresponding to the respective intermediate symbols; a codeword selection step of selecting, for each of the intermediate symbols, one code from the predetermined code group based on the corresponding code selection information and selecting one codeword from the selected code based on the corresponding codeword selection information; and an error correction codeword generating step of generating the error correction codeword based on the codewords selected for the respective intermediate symbols.
4. An error correction code decoding method comprising: a received word provisional generation step of provisionally generating a received word composed of a plurality of received symbols based on a received sequence; an intermediate received word generating step of converting each of the received symbols into a decoding target symbol of a predetermined error correction code to generate an intermediate received word composed of the same number of intermediate symbols as the received symbols, each intermediate symbol being a decoding target symbol; an error correction code decoding step of decoding the intermediate received word with the predetermined error correction code; a code selection information generating step of converting each intermediate symbol constituting the decoded intermediate received word into code selection information designating one of a predetermined code group; a received word regeneration step of selecting each of the received symbols, based on the received sequence, from the code designated by the code selection information, thereby regenerating the received word; and an information symbol extraction step of extracting the information symbols contained in the received sequence based on the decoded intermediate received word and the regenerated received word.
5. A medium on which is recorded a program for causing a computer to function as an error correction coding apparatus that error correction codes information symbols to generate an error correction codeword, the program causing the computer to function as: information symbol dividing means for dividing a plurality of original symbols constituting the information symbols into a first symbol group and a second symbol group; intermediate codeword generating means for encoding the first symbol group with a predetermined error correction code to generate an intermediate codeword composed of a plurality of intermediate symbols; code selection information generating means for converting each intermediate symbol constituting the intermediate codeword into code selection information designating one of a predetermined code group; codeword selection information generating means for reconfiguring the second symbol group into the same number of symbols as the intermediate symbols and encoding each with a predetermined code to generate a plurality of pieces of codeword selection information corresponding to the respective intermediate symbols; codeword selecting means for selecting, for each of the intermediate symbols, one code from the predetermined code group based on the corresponding code selection information and selecting one codeword from the selected code based on the corresponding codeword selection information; and error correction codeword generating means for generating the error correction codeword based on the codewords selected for the respective intermediate symbols.
6. A medium on which is recorded a program for causing a computer to function as: received word provisional generation means for provisionally generating a received word composed of a plurality of received symbols based on a received sequence; intermediate received word generating means for converting each of the received symbols into a decoding target symbol of a predetermined error correction code to generate an intermediate received word composed of the same number of intermediate symbols as the received symbols, each intermediate symbol being a decoding target symbol; error correction code decoding means for decoding the intermediate received word with the predetermined error correction code; code selection information generating means for converting each intermediate symbol constituting the decoded intermediate received word into code selection information designating one of a predetermined code group; received word regenerating means for selecting each of the received symbols, based on the received sequence, from the code designated by the code selection information, thereby regenerating the received word; and information symbol extracting means for extracting the information symbols contained in the received sequence based on the decoded intermediate received word and the regenerated received word.
7. An error correction coding method for constructing a codeword of an error correction code by mapping each symbol V_i of a codeword V = (V_0, V_1, …, V_N-1) of a predetermined error correction code C_s over a predetermined Galois field GF(q) to an element belonging to a subset {u} of the m-dimensional vectors (u_0, u_1, …, u_m-1) over a predetermined Galois field GF(p), wherein each V_i (i = 0, 1, …, N−1) is an element of the Galois field GF(q), each u_i (i = 0, 1, …, m−1) is an element of the Galois field GF(p), m is a positive integer, and the order of the subset {u} is equal to the number q of elements of the Galois field GF(q).
8. The error correction coding method according to claim 7, wherein the codeword V is generated based on a part of the information symbols, and the mapping destination of each symbol V_i is determined based on the rest of the information symbols.
9. The error correction coding method according to claim 8, wherein the subset {u} is defined by f_0 ∪ f_1 ∪ … ∪ f_H-1, where f_0 and f_i = f_0 + w_i (i = 1, 2, …, H−1) are error correction codes over the Galois field GF(p), and each w_i is an m-dimensional vector over the Galois field GF(p) defined so that f_i ∩ f_j = {φ} (i ≠ j; i, j = 0, 1, 2, …, H−1), one of the error correction codes f_0 to f_H-1 is selected based on the symbol V_i, and one of the codewords belonging to the selected error correction code f_j (j = 0, 1, 2, …, H−1) is selected as the mapping destination based on the remaining part of the information symbols.
10. An error correction code decoding method in which, based on a received word r = (r_0, r_1, ..., r_{N-1}), an N-dimensional vector R = (R_0, R_1, ..., R_{N-1}) is generated by mapping each r_i (i = 0, 1, ..., N-1) to an element of the union of a predetermined Galois field GF(q) and an erasure symbol {ε}, and the N-dimensional vector is decoded by a predetermined error correction code C_s over the Galois field GF(q) to generate an estimated codeword V′ = (V_0′, V_1′, ..., V_{N-1}′), characterized in that each r_i (i = 0, 1, ..., N-1) is an m-dimensional vector over a Galois field GF(p).
11. The error correction code decoding method according to claim 10, characterized in that reception likelihoods (θ_0, θ_1, ..., θ_{N-1}) corresponding to the received word r are acquired, and maximum likelihood decoding is performed for each r_i (i = 0, 1, 2, ..., N-1) based on the reception likelihoods (θ_0, θ_1, ..., θ_{N-1}), treating each r_i as belonging to a subset {u} of the m-dimensional vectors (u_0, u_1, ..., u_{m-1}) over a predetermined Galois field GF(p).
12. The error correction code decoding method according to claim 11, characterized in that the subset {u} is defined by f_0 ∪ f_1 ∪ ... ∪ f_{H-1}, where f_0 and f_i = f_0 + w_i (i = 1, 2, ..., H-1) are error correction codes over the Galois field GF(p) and each w_i is an m-dimensional vector over the Galois field GF(p) chosen so that f_i ∩ f_j = {φ} (i ≠ j; i, j = 0, 1, 2, ..., H-1); one of the error correction codes f_0 to f_{H-1} is selected based on V_i′ (i = 0, 1, 2, ..., N-1), and maximum likelihood decoding is performed for the r_i corresponding to that V_i′, treating r_i as one of the codewords belonging to the selected error correction code f_j (j = 0, 1, 2, ..., H-1).
JP20719599A 1999-07-22 1999-07-22 Error correction coding apparatus, method and medium, and error correction code decoding apparatus, method and medium Expired - Fee Related JP3451221B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP20719599A JP3451221B2 (en) 1999-07-22 1999-07-22 Error correction coding apparatus, method and medium, and error correction code decoding apparatus, method and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP20719599A JP3451221B2 (en) 1999-07-22 1999-07-22 Error correction coding apparatus, method and medium, and error correction code decoding apparatus, method and medium

Publications (2)

Publication Number Publication Date
JP2001036417A JP2001036417A (en) 2001-02-09
JP3451221B2 true JP3451221B2 (en) 2003-09-29

Family

ID=16535827

Family Applications (1)

Application Number Title Priority Date Filing Date
JP20719599A Expired - Fee Related JP3451221B2 (en) 1999-07-22 1999-07-22 Error correction coding apparatus, method and medium, and error correction code decoding apparatus, method and medium

Country Status (1)

Country Link
JP (1) JP3451221B2 (en)

Families Citing this family (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6320520B1 (en) * 1998-09-23 2001-11-20 Digital Fountain Information additive group code generator and decoder for communications systems
US6307487B1 (en) 1998-09-23 2001-10-23 Digital Fountain, Inc. Information additive code generator and decoder for communication systems
US7068729B2 (en) 2001-12-21 2006-06-27 Digital Fountain, Inc. Multi-stage code generator and decoder for communication systems
US9240810B2 (en) 2002-06-11 2016-01-19 Digital Fountain, Inc. Systems and processes for decoding chain reaction codes through inactivation
US9419749B2 (en) 2009-08-19 2016-08-16 Qualcomm Incorporated Methods and apparatus employing FEC codes with permanent inactivation of symbols for encoding and decoding processes
US9288010B2 (en) 2009-08-19 2016-03-15 Qualcomm Incorporated Universal file delivery methods for providing unequal error protection and bundled file delivery services
JP3973026B2 (en) * 2002-08-30 2007-09-05 富士通株式会社 Decoding device, decoding method, and program for causing processor to perform the method
KR101143282B1 (en) 2002-10-05 2012-05-08 디지털 파운튼, 인크. Systematic encoding and decoding of chain reaction codes
US7139960B2 (en) 2003-10-06 2006-11-21 Digital Fountain, Inc. Error-correcting multi-stage code generator and decoder for communication systems having single transmitters or multiple transmitters
JP4971144B2 (en) 2004-05-07 2012-07-11 デジタル ファウンテン, インコーポレイテッド File download and streaming system
US9432433B2 (en) 2006-06-09 2016-08-30 Qualcomm Incorporated Enhanced block-request streaming system using signaling or block creation
US9380096B2 (en) 2006-06-09 2016-06-28 Qualcomm Incorporated Enhanced block-request streaming system for handling low-latency streaming
US9178535B2 (en) 2006-06-09 2015-11-03 Digital Fountain, Inc. Dynamic stream interleaving and sub-stream based delivery
US9386064B2 (en) 2006-06-09 2016-07-05 Qualcomm Incorporated Enhanced block-request streaming using URL templates and construction rules
US9209934B2 (en) 2006-06-09 2015-12-08 Qualcomm Incorporated Enhanced block-request streaming using cooperative parallel HTTP and forward error correction
US9136983B2 (en) 2006-02-13 2015-09-15 Digital Fountain, Inc. Streaming and buffering using variable FEC overhead and protection periods
US9270414B2 (en) 2006-02-21 2016-02-23 Digital Fountain, Inc. Multiple-field based code generator and decoder for communications systems
JP4662367B2 (en) * 2006-04-18 2011-03-30 共同印刷株式会社 Information symbol encoding method and apparatus, information symbol decoding method and decoding apparatus
WO2007134196A2 (en) 2006-05-10 2007-11-22 Digital Fountain, Inc. Code generator and decoder using hybrid codes
RU2010114256A (en) 2007-09-12 2011-10-20 Диджитал Фаунтин, Инк. (Us) Formation and transmission of original identification information to ensure reliable data exchange
EP2178215A1 (en) * 2008-10-16 2010-04-21 Thomson Licensing Method for error correction and error detection of modified array codes
US9281847B2 (en) 2009-02-27 2016-03-08 Qualcomm Incorporated Mobile reception of digital video broadcasting—terrestrial services
US9917874B2 (en) 2009-09-22 2018-03-13 Qualcomm Incorporated Enhanced block-request streaming using block partitioning or request controls for improved client-side handling
US20110280311A1 (en) 2010-05-13 2011-11-17 Qualcomm Incorporated One-stream coding for asymmetric stereo video
US9596447B2 (en) 2010-07-21 2017-03-14 Qualcomm Incorporated Providing frame packing type information for video coding
US9319448B2 (en) 2010-08-10 2016-04-19 Qualcomm Incorporated Trick modes for network streaming of coded multimedia data
US9270299B2 (en) 2011-02-11 2016-02-23 Qualcomm Incorporated Encoding and decoding using elastic codes with flexible source block mapping
US8958375B2 (en) 2011-02-11 2015-02-17 Qualcomm Incorporated Framing for an improved radio link protocol including FEC
US9253233B2 (en) 2011-08-31 2016-02-02 Qualcomm Incorporated Switch signaling methods providing improved switching between representations for adaptive HTTP streaming
US9843844B2 (en) 2011-10-05 2017-12-12 Qualcomm Incorporated Network streaming of media data
US9294226B2 (en) 2012-03-26 2016-03-22 Qualcomm Incorporated Universal object delivery and template-based file delivery

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
笠原正雄, "IT99-47: On Generalized Overlapped Cyclic Codes (Code KII)," IEICE Technical Report [Information Theory], Japan, September 16, 1999, IEICE Tech. Rep. Vol. 99, No. 295, pp. 19-24
笠原正雄, 羽田亨, "IT99-41: Some Techniques for Encoding and Decoding of Error Correction Codes," IEICE Technical Report [Information Theory], Japan, July 23, 1999, IEICE Tech. Rep. Vol. 99, No. 235, pp. 49-54
羽田亨, 笠原正雄, "IT99-46: Performance of KI Using Mapping," IEICE Technical Report [Information Theory], Japan, September 16, 1999, IEICE Tech. Rep. Vol. 99, No. 295, pp. 13-18

Also Published As

Publication number Publication date
JP2001036417A (en) 2001-02-09

Similar Documents

Publication Publication Date Title
US10686473B2 (en) Encoding method and apparatus using CRC code and polar code
RU2571587C2 (en) Method and device for encoding and decoding data in convoluted polar code
JP5524287B2 (en) In-place transform with application to encoding and decoding of various code classes
AU2017326022B2 (en) Method and apparatus for encoding data using a polar code
RU2595542C2 (en) Device and method for transmitting and receiving data in communication/broadcasting system
Schmidt et al. Collaborative decoding of interleaved Reed–Solomon codes and concatenated code designs
EP1980041B1 (en) Multiple-field based code generator and decoder for communications systems
EP0728390B1 (en) Method and apparatus for decoder optimization
US7293222B2 (en) Systems and processes for fast encoding of hamming codes
JP3544033B2 (en) Punctured convolutional encoding method and apparatus
US6769091B2 (en) Encoding method and apparatus using squished trellis codes
US7260766B2 (en) Iterative decoding process
US6477680B2 (en) Area-efficient convolutional decoder
US6543023B2 (en) Parity-check coding for efficient processing of decoder error events in data storage, communication and other systems
JP3328093B2 (en) Error correction device
CN1210872C (en) Reduced search symbol estimation algorithm
JP4773356B2 (en) Error correcting multi-stage code generator and decoder for a communication system having a single transmitter or multiple transmitters
US6694478B1 (en) Low delay channel codes for correcting bursts of lost packets
CN100355201C (en) Reduced soft output information packet selection
CN1770639B (en) Concatenated iterative and algebraic coding
Honary et al. Trellis decoding of block codes: A practical approach
KR101297060B1 (en) Multidimensional block encoder with sub-block interleaver and de-interleaver
JP3917563B2 (en) Method and system for decoding low density parity check (LDPC) codes
US7956772B2 (en) Methods and apparatus employing FEC codes with permanent inactivation of symbols for encoding and decoding processes
CN101553990B (en) Determination of interleaver sizes for turbo codes

Legal Events

Date Code Title Description
TRDD Decision of grant or rejection written
R150 Certificate of patent or registration of utility model

Ref document number: 3451221

Country of ref document: JP

Free format text: JAPANESE INTERMEDIATE CODE: R150

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20080711

Year of fee payment: 5

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20090711

Year of fee payment: 6

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20100711

Year of fee payment: 7

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20110711

Year of fee payment: 8

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20120711

Year of fee payment: 9

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20130711

Year of fee payment: 10

LAPS Cancellation because of no payment of annual fees