GB2426671A - Error correction decoder using parity check equations and Gaussian reduction - Google Patents

Error correction decoder using parity check equations and Gaussian reduction Download PDF

Info

Publication number
GB2426671A
Authority
GB
United Kingdom
Prior art keywords
symbols
codeword
decoder
euclidean distance
received
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB0502613A
Other versions
GB2426671B (en)
GB0502613D0 (en)
Inventor
Martin Tomlinson
Cenjung Tjhai
Mohammed Zaki Ahmed
Marcel Adrian Ambroze
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to GB0502613A priority Critical patent/GB2426671B/en
Publication of GB0502613D0 publication Critical patent/GB0502613D0/en
Publication of GB2426671A publication Critical patent/GB2426671A/en
Application granted granted Critical
Publication of GB2426671B publication Critical patent/GB2426671B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H03 ELECTRONIC CIRCUITRY
    • H03M CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00 Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/37 Decoding methods or techniques, not specific to the particular type of coding provided for in groups H03M13/03 - H03M13/35
    • H03M13/45 Soft decoding, i.e. using symbol reliability information
    • H03M13/451 Soft decoding, i.e. using symbol reliability information using a set of candidate code words, e.g. ordered statistics decoding [OSD]
    • H03M13/453 Soft decoding, i.e. using symbol reliability information using a set of candidate code words, e.g. ordered statistics decoding [OSD] wherein the candidate code words are obtained by an algebraic decoder, e.g. Chase decoding
    • H03M13/455 Soft decoding, i.e. using symbol reliability information using a set of candidate code words, e.g. ordered statistics decoding [OSD] wherein the candidate code words are obtained by an algebraic decoder, e.g. Chase decoding using a set of erasure patterns or successive erasure decoding, e.g. generalized minimum distance [GMD] decoding
    • H ELECTRICITY
    • H03 ELECTRONIC CIRCUITRY
    • H03M CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00 Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/03 Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words
    • H03M13/05 Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words using block codes, i.e. a predetermined number of check bits joined to a predetermined number of information bits
    • H03M13/11 Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words using block codes, i.e. a predetermined number of check bits joined to a predetermined number of information bits using multiple parity bits
    • H03M13/1102 Codes on graphs and decoding on graphs, e.g. low-density parity check [LDPC] codes
    • H03M13/1191 Codes on graphs other than LDPC codes
    • H ELECTRICITY
    • H03 ELECTRONIC CIRCUITRY
    • H03M CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00 Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/37 Decoding methods or techniques, not specific to the particular type of coding provided for in groups H03M13/03 - H03M13/35
    • H ELECTRICITY
    • H03 ELECTRONIC CIRCUITRY
    • H03M CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00 Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/37 Decoding methods or techniques, not specific to the particular type of coding provided for in groups H03M13/03 - H03M13/35
    • H03M13/3738 Decoding methods or techniques, not specific to the particular type of coding provided for in groups H03M13/03 - H03M13/35 with judging correct decoding
    • H ELECTRICITY
    • H03 ELECTRONIC CIRCUITRY
    • H03M CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00 Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/37 Decoding methods or techniques, not specific to the particular type of coding provided for in groups H03M13/03 - H03M13/35
    • H03M13/45 Soft decoding, i.e. using symbol reliability information
    • H ELECTRICITY
    • H03 ELECTRONIC CIRCUITRY
    • H03M CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00 Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/61 Aspects and characteristics of methods and arrangements for error correction or error detection, not provided for otherwise
    • H03M13/615 Use of computational or mathematical techniques
    • H03M13/616 Matrix operations, especially for generator matrices or check matrices, e.g. column or row permutations
    • H ELECTRICITY
    • H03 ELECTRONIC CIRCUITRY
    • H03M CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00 Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/03 Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words
    • H03M13/05 Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words using block codes, i.e. a predetermined number of check bits joined to a predetermined number of information bits
    • H03M13/13 Linear codes
    • H03M13/15 Cyclic codes, i.e. cyclic shifts of codewords produce other codewords, e.g. codes defined by a generator polynomial, Bose-Chaudhuri-Hocquenghem [BCH] codes
    • H ELECTRICITY
    • H03 ELECTRONIC CIRCUITRY
    • H03M CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00 Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/03 Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words
    • H03M13/05 Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words using block codes, i.e. a predetermined number of check bits joined to a predetermined number of information bits
    • H03M13/13 Linear codes
    • H03M13/15 Cyclic codes, i.e. cyclic shifts of codewords produce other codewords, e.g. codes defined by a generator polynomial, Bose-Chaudhuri-Hocquenghem [BCH] codes
    • H03M13/151 Cyclic codes, i.e. cyclic shifts of codewords produce other codewords, e.g. codes defined by a generator polynomial, Bose-Chaudhuri-Hocquenghem [BCH] codes using error location or error correction polynomials
    • H03M13/152 Bose-Chaudhuri-Hocquenghem [BCH] codes

Abstract

This invention is concerned with an in place decoder using ordered symbol reliabilities which provides improvements in both performance and implementation of decoders for (n,k) linear, block, error correcting codes. For relatively short codeword lengths (<1000 symbols) the combination of powerful codes and the ordered symbol reliabilities decoder can produce a codeword decoder error rate (Frame Error Rate, FER) better than most known decoding arrangements including iterative decoders. The decoder is an in place decoder which reduces implementation complexity. In its operation the decoder considers the least reliable symbols (for each received vector) as erasures and derives parity check equations in order to solve for these least reliable symbols. This procedure ensures that any parity check derivation failure is confined to the more reliable symbols thereby outperforming alternative methods. The decoder takes into account the possibility of errors in the symbols not considered as erasures and applies hypothetical error patterns to these symbols in order corresponding to the probability of these error patterns. In this way the decoder has a minimum FER for a given number of evaluated codewords. It is shown that the hypothetical error patterns may be determined from the sign of the partial cross correlation products between the received vector and evaluated codewords. Output codeword selection is based upon minimum squared Euclidean distance between evaluated codewords and the received vector. It is shown that extrinsic information may be used in the symbol reliability ordering to reduce the required number of evaluated codewords for a given FER. Additionally it is shown that fast decoding may be realised by carrying out decoding after k or more symbols have been received, considering the symbols yet to be received as erased symbols.

Description

IMPROVED ERROR CORRECTION DECODER

This invention is concerned with an in place decoder using ordered symbol reliabilities which provides improvements in both performance and implementation of decoders for error correcting codes.
BACKGROUND

The performance of an error correcting system depends firstly on the distance properties of the codewords of the error correcting code [1,2,3] that is being used and secondly on the type of decoder being deployed. The idea of exploiting the ordering of the received symbols of a codeword in terms of their perceived reliability in a decoder is an old one [4]. In a recent paper [5] Fossorier and Lin have shown that for relatively short codeword lengths (<1000 symbols) the combination of powerful codes and an ordered statistic decoder can produce a codeword decoder error rate (Frame Error Rate, FER) better than most iterative decoding arrangements. Moreover, because the technique may be applied to near optimum and theoretically designed codes, there are no error floor problems as there are in iterative decoders using weaker codes, and the performance is predictable beyond the range of performance evaluations provided by computer simulations.
For an (n,k) code there are n-k parity symbols in each codeword and each of these is determined by a unique parity check equation. The symbols involved in each equation are defined by a parity check matrix known as the H matrix. There are several ways in which the reliability of received symbols may be assessed depending on the nature of the transmission impairments. The most common impairment is the addition of white Gaussian noise (AWGN) but other impairments are possible such as interference, deliberate jamming, transmission distortion or even complete transmission link loss for some symbols.
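The H matrix idea above can be illustrated with a short Python sketch. This is an illustration only, using the small (7,4) Hamming code rather than one of the patent's codes (whose equations appear later as figures); the row values and names below are choices made here, not taken from the patent. Each parity check row is stored as the set of positions of its 1's, as in the patent's Appendix, and a word is a codeword exactly when every check sums to zero modulo 2:

```python
# Each H-matrix row stored as the positions of its 1's. Illustrative
# (7,4) Hamming code, not one of the codes from the patent:
H_rows = [{0, 2, 4, 6},   # c0 + c2 + c4 + c6 = 0 (mod 2)
          {1, 2, 5, 6},   # c1 + c2 + c5 + c6 = 0 (mod 2)
          {3, 4, 5, 6}]   # c3 + c4 + c5 + c6 = 0 (mod 2)

def syndrome(bits, rows):
    """Modulo-2 sum of the bits at each row's positions; all-zero => codeword."""
    return [sum(bits[j] for j in row) % 2 for row in rows]

print(syndrome([0, 1, 1, 0, 0, 1, 1], H_rows))   # a valid codeword: [0, 0, 0]
print(syndrome([0, 1, 1, 0, 1, 1, 1], H_rows))   # bit 4 flipped: [1, 0, 1]
```

A non-zero syndrome signals that the received word violates at least one parity check equation.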
DESCRIPTION OF THE INVENTION

The transmitted codeword is represented as C(x) = c0 + c1x + c2x^2 + ... + c(n-1)x^(n-1).
The coefficients of the codeword take on values from a finite alphabet. The binary alphabet is most common with coefficients only having values 0 or 1. The codeword is received as R(x) = r0 + r1x + r2x^2 + ... + r(n-1)x^(n-1).
In the decoder the most reliable symbols of the received codeword are used to form a hard decision on the transmitted
the hard decision is correct. The metric may be any reliability function as long as the function is always positive and monotonic with respect to the reliability of the symbol. The received codeword becomes:
where zi denotes an erasure.
produce a new set of n-k-1 equations. The procedure repeats using the first of the non flagged equations that contains the next
The procedure is a form of Gaussian reduction of the parity check equations.
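The flag-and-reduce procedure can be sketched in Python as follows. This is an illustrative reconstruction, not the patented implementation: the equation representation (a set of erased-symbol positions plus a constant bit formed from the hard-decided symbols), the hard_fallback argument and the function name are all choices made here.

```python
def reduce_and_solve(equations, erased_in_order, hard_fallback):
    """equations: list of [set_of_erased_positions, constant_bit] (mutated).
    erased_in_order: erased symbol positions, least reliable first.
    hard_fallback: hard-decision bit used if a symbol must be unerased.
    Returns a dict mapping each erased position to its solved bit."""
    flagged, done, sol = [], set(), {}
    for z in erased_in_order:
        cand = [i for i in range(len(equations))
                if i not in done and z in equations[i][0]]
        if not cand:                      # derivation failure: unerase symbol
            sol[z] = hard_fallback[z]
            continue
        pivot = cand[0]                   # flag the first equation containing z
        done.add(pivot)
        flagged.append((pivot, z))
        for i in cand[1:]:                # Gaussian step: remove z elsewhere
            equations[i][0] ^= equations[pivot][0]   # GF(2) subtraction of sets
            equations[i][1] ^= equations[pivot][1]
    for eq_i, z in reversed(flagged):     # back-substitute, last flag first
        syms, const = equations[eq_i]
        val = const
        for s in syms:
            if s != z:
                val ^= sol[s]             # already-solved symbols substitute in
        sol[z] = val
    return sol

# Three toy checks over GF(2): z0+z1=1, z1+z2=0, z0+z2=1
eqs = [[{0, 1}, 1], [{1, 2}, 0], [{0, 2}, 1]]
print(reduce_and_solve(eqs, [0, 1, 2], {2: 0}))
```

In the toy run the third equation reduces to an empty set before z2 is reached, so z2 is unerased to its hard decision, mirroring the failure handling described in the text.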
remain that are not in flagged equations. In the event that no non flagged equations remain containing the erased symbol Zfail, this symbol is replaced with the least reliable
each failure to solve an erased symbol, the symbol is replaced with the next least reliable (but untried) of the set of k unerased symbols. In practice the number of failures of this type is small and depends upon the error correcting code that is used. For a Maximum Distance Separable code [3] no failures of this type can occur. For other codes such as a BCH code [1,2,3] a typical number of such failures where erased symbols cannot be solved and are replaced/unerased is 2 or 3. These failures, if they occur, have high probability that they are the last to be solved. Correspondingly it is a feature of the invention that since the erased symbols are solved in the order of their unreliability, the erased symbols that cannot be solved are the most reliable of the erased bits, and unerasing these symbols produces minimal degradation to the performance of the decoder. As an example Fig 2 shows the probability of an erased bit not being able to be solved from the most reliable bit (of the erased bits) onwards for the
than 10^-3. Once all parity check equations have been flagged the erased symbols are determined from the unerased symbols. Starting with the
the remaining flagged equations. A block schematic of this stage of the decoder is shown in Fig 1. The received symbols are stored in the shift register with the erased symbols being replaced by the unknowns zi. The Gaussian reduced equations are computed and used to define the connection of symbol adders from each respective shift register stage to compute the outputs d1 through to dn. The non erased symbols contained in the shift register are switched directly through to their respective outputs so that overall, the decoded codeword containing no erased symbols is present at the outputs d1 through to dn. As an example of the decoder consider the (15,7) binary code. The code length is 15 and there are 7 information bits encoded into a total of 15 bits. The transmitted sequence is represented as
parity check equations. There are 8 parity check equations which define 8 of the coefficients ci:
The 7 information bits may be distributed amongst the 15 coefficients in several ways; the usual convention is that the coefficients c0 through to c6 are set equal to the information bits and c7 through to c14 are parity check bits derived from c0 through to c6. As an alternative representation, the parity check equations may be represented as a parity check matrix, the H matrix:
where in each row of the matrix the positions of the 1's indicate the coefficients used in the parity check equation corresponding to that row. The received codeword R(x) is
After substitution with unknowns in the parity check equations, the following set of equations is obtained:
Starting with the least reliable symbol z5, equation (A3) is flagged and subtracted from equations (A5) and (A6) only, because z5 is not contained in the other equations. The new set of equations obtained is as follows:
The * represents the flagging of equation (B3) meaning that this equation will be fixed and used to solve for z5. The number by the side of the * represents the order in which flagging occurred. Carrying out the procedure for the next least reliable symbol z3, equation (B1) is flagged and subtracted from equations (B4), (B5) and (B6) to produce
Carrying out the procedure for the next least reliable symbol z9, equation (C5) is flagged and subtracted from equations (C6) and (C7) to produce
Carrying out the procedure for the next least reliable symbol z1, equation (D2) is flagged and subtracted from equations (D4) and (D7) to produce
Carrying out the procedure for the next least reliable symbol z12, equation (E6) is flagged and this symbol is not in any other unflagged equations. The next least reliable symbol z8 is contained in unflagged equation (E4) which is flagged and is subtracted from equations (E7) and (E8) to produce
Carrying out the procedure for the next least reliable symbol z0, equation (F8) is flagged and this symbol is not in any other unflagged equations. The last least reliable symbol z14 is not contained in the last unflagged equation (F7) and so z14 is unerased to
equation (F7) and so the completed set of equations is
The last flagged equation (G7) is solved first for z10 which is substituted back into equation (G4). The penultimate flagged *7 equation (G8) is solved next for z0 which is substituted back into equations (G5), (G4) and (G1). The next equation *6 is solved for z8 which is substituted back into equations (G2) and (G6). The next equation *5 is solved for z12. The next equation *4 is solved for z1 which is substituted back into equations (G1) and (G5). The next equation *3 is solved for z9 which is substituted back into equation (G3). The next equation *2 is solved for z3 which is substituted back into equation (G3). The last equation *1 (which is (G3)) is solved for the last unknown z5. Having solved all of the erased symbols a decoded codeword is obtained which is denoted S0(x). In the event that all of the unerased symbols are correct then S0(x) will be equal to C(x), the transmitted codeword. If any of the unerased symbols are in error then S0(x) will not be equal to C(x). To overcome this problem the unerased symbols are systematically altered to alternative states and the solved parity check equations are used to produce other candidate decoded codewords. It is a feature of this invention that the least reliable unerased symbols are altered first so that the order in which the candidate codewords are produced corresponds to the ranked probability of the error combinations that may have occurred in the unerased symbols. For example, a double symbol combination of least reliable unerased symbols may have a higher probability of being in error than the most reliable symbol being in error. In this case the double symbol combination of least reliable unerased symbols is altered before the most reliable symbol is altered. In the binary case altering the symbol means simply inverting the unerased bit. The unerased bit bi is replaced with bi+1 in the set of solved parity check equations.
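The ranked alteration of the hard-decided unerased bits can be sketched as follows. This is a hedged illustration: approximating each error pattern's probability by the sum of the |reliability| metrics of the flipped bits is an assumption made here, and the max_weight bound on pattern size is also a choice of this sketch, not a detail fixed by the patent.

```python
from itertools import combinations

def ranked_error_patterns(reliabilities, max_weight):
    """reliabilities: dict of unerased bit position -> |metric|.
    Yields tuples of positions to invert, most probable pattern first
    (smaller total reliability flipped = more probable error pattern)."""
    patterns = []
    for w in range(1, max_weight + 1):
        for combo in combinations(sorted(reliabilities), w):
            patterns.append((sum(reliabilities[b] for b in combo), combo))
    patterns.sort()          # low total flipped reliability tried first
    for _, combo in patterns:
        yield combo

# Note how the double pattern (0, 1) is tried before the single flip (2,),
# matching the ordering argument in the text.
print(list(ranked_error_patterns({0: 0.1, 1: 0.2, 2: 0.9}, 2)))
# → [(0,), (1,), (0, 1), (2,), (0, 2), (1, 2)]
```

Each yielded pattern would be applied to the hard decisions and propagated through the solved parity check equations to produce the next candidate codeword.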
being correct from a consideration of the reliability of the unerased bits alone. In order to take into account the reliability of the erased bits the squared Euclidean [2,5] distance between each candidate decoded codeword Si(x) and the received codeword R(x) is calculated and is denoted by Ei. For the binary case Ei is given by
where S̄i(x) has coefficients with values +1 and -1 and is obtained from Si(x) by mapping each coefficient of value 0 to +1 and each coefficient of value 1 to -1 respectively. Finally the codeword that is selected to be output from the decoder is the codeword Smin(x) which has the smallest squared
from the decoder after a predetermined number of codewords have been obtained by the alterations to the hard decided, unerased symbols, or the codeword with the smallest squared Euclidean distance is output from the decoder after a decoder exit criterion has been satisfied.
As candidate codewords are produced by the decoder in decreasing probability of being correct, it is straightforward to trade off decoder complexity against performance by limiting the maximum number of codewords. It is a feature of this invention that for a given maximum number of codewords the best achievable decoder performance is obtained. For long codes there is an increasing chance that the unerased symbols contain more errors than are searched out by checking error patterns with incrementally increasing numbers of symbols in error. The positions of these errors may be determined by noting that the evaluation of the codeword with the smallest squared Euclidean distance Ei corresponds to the determination of the codeword with the highest cross correlation.
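This equivalence follows from expanding the square: with candidate coefficients sj = ±1, Ei = Σ rj^2 + n − 2 Σ rj·sj, and only the correlation term depends on the candidate. A minimal Python sketch (the 0 → +1, 1 → −1 mapping is from the description; the function names and sample values are choices made here):

```python
def squared_euclidean(received, candidate_bits):
    """Ei between the soft received vector and a candidate codeword,
    mapping bit 0 -> +1 and bit 1 -> -1 as in the description."""
    mapped = [1.0 if b == 0 else -1.0 for b in candidate_bits]
    return sum((r - s) ** 2 for r, s in zip(received, mapped))

def cross_correlation(received, candidate_bits):
    mapped = [1.0 if b == 0 else -1.0 for b in candidate_bits]
    return sum(r * s for r, s in zip(received, mapped))

# The candidate with the smallest Ei is also the one with the largest
# cross correlation, so either metric selects the same output codeword.
received = [0.9, -1.1, 0.2]                       # illustrative soft values
candidates = [[0, 1, 0], [0, 1, 1], [1, 0, 0]]
by_distance = min(candidates, key=lambda c: squared_euclidean(received, c))
by_correlation = max(candidates, key=lambda c: cross_correlation(received, c))
print(by_distance == by_correlation)              # → True
```

The selection can therefore be implemented with whichever metric is cheaper in the target hardware.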
negative. For the coefficients j that correspond to the unerased bit positions, the partial products R(x^j)S̄i(x^-j) are all positive because
constitute a codeword. Thus it is possible that there is another codeword U(x) defined as the original codeword modified.
Note that M(x) would normally have low weight and carries out a small modification to Si(x). Note also that, with the constraint that the code is linear, the modification M(x) is itself a codeword. The idea is that the codeword U(x) produces a cross correlation Yi which is larger than Xi
As before Ū(x) has coefficients with values +1 and -1 and is obtained from U(x) by mapping each coefficient of value 0 to +1 and each coefficient of value 1 to -1 respectively. For the unerased bit positions some of the partial products R(x^j)Ū(x^-j) will now be negative because the codeword U(x) is not equal to Si(x). (Previously all unerased bit positions produced positive partial products.) Thus these partial products on their own would cause Yi to be less than Xi. However for the erased bit positions the previous partial products R(x^j)S̄i(x^-j) are not all positive
choice of M(x), the modification codeword, the cross correlation value Yi may be greater than Xi. It is a feature of this invention
will not itself be a codeword except by chance. Thus M(x) = Ci(x) for the case where {V(x) + Ci(x)} mod 2 has low Hamming distance, taken over all codewords in the code, i = 0 to
A number of procedures may be used to determine M(x) from V(x). A list of low weight codewords may be stored and compared to V(x) with candidate modification codewords Mj(x) selected on the basis of low Hamming weight compared to V(x). Alternatively a modified parity check matrix may be derived and candidate codewords determined from V(x) with selected bits systematically deleted.
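The stored-list procedure above can be sketched in Python. This is an illustration under assumptions: the function names and sample vectors are invented here, and the sketch covers only the first of the two procedures (a stored list of low weight codewords scored against V(x) by Hamming distance).

```python
def disagreement_vector(received, candidate_bits):
    """V: 1 at positions where the partial product r_j * s_j is negative,
    i.e. where the received value disagrees with the candidate codeword."""
    mapped = [1.0 if b == 0 else -1.0 for b in candidate_bits]
    return [1 if r * s < 0 else 0 for r, s in zip(received, mapped)]

def best_modification(v, low_weight_codewords):
    """Pick the stored low weight codeword closest to V in Hamming distance."""
    return min(low_weight_codewords,
               key=lambda m: sum(a != b for a, b in zip(v, m)))

v = disagreement_vector([0.9, -1.1, 0.2, -0.3], [0, 1, 1, 0])
print(v)                                                     # → [0, 0, 1, 1]
print(best_modification(v, [[1, 1, 0, 0], [0, 0, 1, 1]]))    # → [0, 0, 1, 1]
```

The selected modification codeword would then be added (mod 2) to the current best candidate and retained only if the cross correlation actually increases.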
The best modification codeword will produce the maximum increase Wi in the cross correlation value Xi
Note that mj is the binary (0 or 1) coefficient of M(x^-j). It is apparent that the performance of the decoder depends upon the reliability of the received symbols. This reliability may be enhanced by taking into account the extrinsic information available from the parity check equations [6,7]. Since each parity check equation sums to zero, each symbol in an equation is able to provide an extrinsic reliability measure to every other symbol that participates in that equation. This reliability is determined by the weakest of the participating symbols and hence the fewer participating symbols the higher the reliability. Such sparse H matrix codes tend to have poor Hamming distance and are not usually chosen. However there is a class of high minimum distance codes, known as the Kasahara codes [3], which are formed by using an overall binary parity check on bits of an m bit GF(2^m) symbol from a GF(2^m) error correcting code. These sparse parity check equations are contained within the H matrix of the overall codes and may be used to provide extrinsic information to enhance the reliability of participating bits. It is a feature of this invention that extrinsic information is derived using only the sparse parity check equations of the H matrix to aid the ordering of the reliability of the received symbols. Parity check equations that are dense provide poor extrinsic information and, if used for symbol reliability ordering, tend to produce worse overall results from the decoder. Consequently only the sparse parity check equations are used in this feature of the invention. An example of a KS (80,40,14) code is given in the Appendix. The first 16 parity check equations of this code are sparse and these alone are used to aid the ordering of the reliability of the received symbols when applying the decoder to this code. In many applications, symbols are transmitted or stored sequentially and correspondingly symbols are received or read sequentially.
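One way to realise the extrinsic step on the sparse checks is the min-magnitude rule familiar from min-sum decoding. This is an assumption made for illustration — the patent does not prescribe this exact combining rule — and the function and variable names are invented here.

```python
def extrinsic_update(soft, sparse_checks):
    """soft: per-bit values (sign = hard decision, magnitude = reliability).
    sparse_checks: iterable of position tuples, one per sparse parity check.
    Each bit gains a term whose magnitude is the smallest |value| among the
    OTHER bits of the check (the weakest participant) and whose sign is the
    product of their signs, reflecting that the check must sum to zero."""
    out = list(soft)
    for check in sparse_checks:
        for i in check:
            others = [soft[j] for j in check if j != i]
            sign = 1.0
            for v in others:
                if v < 0:
                    sign = -sign
            out[i] += sign * min(abs(v) for v in others)
    return out

print(extrinsic_update([0.5, -2.0, 1.5], [(0, 1, 2)]))   # → [-1.0, -1.5, 1.0]
```

The combined values would then be used only for the reliability ordering step; a dense check would contribute a near-zero (unreliable) term, which is why only the sparse equations are worth using.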
In order to speed up the decoding of the k symbols of information, as soon as k or more symbols have been received or read the remaining symbols are declared as having been received or read with zero reliability. The n assembled symbols are then decoded into the k information symbols as described above. A number of different criteria may be used to determine that the decoded codeword is likely to be correct, such as below threshold Euclidean distance, satisfaction of a preamble or cyclic redundancy check (CRC). Another, straightforward method is to compare the most likely codewords obtained after k, k+1, k+2, k+3 .... symbols received or read. If the last m most likely codewords are identical, then decoding may be terminated. The integer m should be chosen depending on the channel symbol error rate, the code being used and the cost of overall incorrect decoding.

Results for Some Typical Codes

The Frame Error Rate (FER) performance achieved by the invention has been evaluated by computer simulation for the extended (104,52,20) Quadratic Residue code [3]. The performance is shown in Fig 3 in comparison to the Sphere Packing Bound offset by 0.19 dB to adjust for binary transmission. It can be seen that at 10^-4 the FER performance is within 0.2 dB of the offset bound. Another performance graph is shown in Fig 4 for the KS(80,40,14) code. This code has some low weight parity check equations as may be seen from the Appendix, which lists the parity check matrix for this code. The low weight equations allow the use of extrinsic information to effectively improve the reliability ordering. Fig 4 shows the FER obtained when using 500 codewords with and without the use of extrinsic information. It can be seen that at low Eb/No ratios the extrinsic information improves the performance and at higher Eb/No ratios makes it worse.

References

[1] W.W. Peterson, Error Correcting Codes, The MIT Press, 1961.
[2] S. Lin and D.J.
Costello, Error Control Coding: Fundamentals and Applications, Prentice-Hall, 1983.
[3] F.J. MacWilliams and N.J.A. Sloane, The Theory of Error Correcting Codes, North-Holland, 1977.
[4] B.G. Dorsch, A decoding algorithm for binary and J-ary output channels, IEEE Trans Inform Theory, Vol IT-20, pp 391-394, May 1974.
[5] M.P.C. Fossorier and S. Lin, Soft-decision decoding of linear block codes based on ordered statistics, IEEE Trans Inform Theory, Vol 41, pp 1379-1396, Sept 1995.
[6] R.G. Gallager, Low-Density Parity Check Codes, Cambridge, MA: M.I.T. Press, 1963.
[7] N. Wiberg, H-A. Loeliger and R. Kötter, Codes and Iterative Decoding on General Graphs, Eur Trans
Appendix

The parity check (H) matrix below is for a sparse version of the extended Quadratic Residue (104,52) binary code [2] having a minimum Hamming distance of 20. The H matrix defines the 52 parity check equations. The -1 symbol at the end of each row is only there to enable the matrix to be easily machine readable; it is not part of the code. The numbers represent the bit positions of the bits involved in each parity check equation.
The notation is that each row contains the positions of bits in that equation. There are 52 rows because there are 52 equations. The k information bits (also 52) may be in any position but traditionally these are in positions 0 to 51. The parity check (H) matrix below is for the KS (80,40) binary code having a minimum Hamming distance of 14.

Claims (1)

  1. Claim 1 A decoder for an error correcting code in which the received or read symbols are ordered according to their reliabilities and in which the least reliable symbols are erased and a derivation for their solutions determined from the parity check equations of the error correcting code. The determination involves a procedure in which each equation is marked as to its position in the sequence of solutions and as to the symbol to be solved. Each solved symbol is eliminated from unmarked equations thus far by a form of Gaussian reduction, and the procedure is carried out in order of the symbol reliabilities, and any unsolvable symbols are unerased and the next symbol solved. Hard decisions are made for the unerased symbols and the erased symbols are determined from the derived parity check equations in reverse order to their derivation, with solved symbols substituted into the remaining derived equations. With the solution of the derived equations a codeword is output together with its squared Euclidean distance from the received or read signal. Claim 2 A decoder according to Claim 1, and in which the hard decided unerased symbols are systematically altered in small groups in an order corresponding to the probabilities that the symbols in the small group are all in error. Each alteration to the hard decided unerased symbols produces a new codeword and the codeword with the smallest squared Euclidean distance is retained. The codeword with the smallest squared Euclidean distance is output from the decoder after a predetermined number of alterations have been carried out to the hard decided unerased symbols, or the codeword with the smallest squared Euclidean distance is output from the decoder after a decoder exit criterion has been satisfied.
Claim 3 A decoder according to Claim 1 or Claim 2, and in which extrinsic information from the parity check equations has been combined with the reliabilities of the received symbols to form an overall reliability for each symbol prior to ordering the received symbols in terms of their reliability. Claim 4 A decoder according to Claim 1, Claim 2, or Claim 3, in which a new codeword is obtained from the codeword with the smallest squared Euclidean distance thus far, by considering modifications to the codeword with the smallest squared Euclidean distance thus far. Such modifications are based upon a modification codeword that has a high correspondence to a binary vector which is obtained from the sign of the partial products of the cross correlation of the received vector and the codeword with the smallest squared Euclidean distance thus far. Claim 5 A decoder according to Claim 1, Claim 2, Claim 3, or Claim 4 in which the cross correlation between the modification codeword and the partial products of the cross correlation of the received vector and the codeword with the smallest squared Euclidean distance thus far is determined, and in the event that the cross correlation is positive the codeword with the smallest squared Euclidean distance thus far is updated with the modification codeword to produce a new codeword having the smallest squared Euclidean distance thus far. Claim 6 A decoder according to Claim 1, Claim 2, Claim 3, Claim 4, or Claim 5, and in which after k or more symbols have been received, or read, for an (n,k) error correcting code, the remaining symbols of the codeword are considered to be erased and decoding carried out. Codewords are retained which have the smallest squared Euclidean distance corresponding to that number of symbols received.
The codeword with the smallest squared Euclidean distance is output from the decoder after a predetermined number of symbols have been received producing the same codeword or the codeword with the smallest squared Euclidean distance is output from tile decoder after a decoder exit criteria has been satisfied.
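The erase-and-solve procedure of Claim 1 can be sketched in code. This is a minimal illustrative implementation, not the patented decoder: the (7,4) Hamming parity-check matrix, the BPSK mapping (bit 0 maps to +1, bit 1 to -1), and all function names are assumptions chosen for the example.

```python
# Sketch of Claim 1: erase the n-k least reliable received symbols and
# solve them from the parity check equations by Gaussian reduction over
# GF(2). The (7,4) Hamming code and BPSK mapping are assumptions.

H = [  # parity-check matrix of a (7,4) Hamming code (example choice)
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def solve_erasures(H, hard, erased):
    """Solve the erased bits of c from H.c = 0 (mod 2) by Gaussian reduction."""
    rows = [row[:] for row in H]
    marked = []      # (equation index, erased position), in derivation order
    used = set()
    for pos in erased:
        # mark an as-yet-unmarked equation that involves this symbol
        piv = next((i for i in range(len(rows))
                    if i not in used and rows[i][pos]), None)
        if piv is None:
            continue  # unsolvable symbol: unerase it (keep its hard decision)
        used.add(piv)
        marked.append((piv, pos))
        # eliminate the solved symbol from the equations unmarked thus far
        for i in range(len(rows)):
            if i not in used and rows[i][pos]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[piv])]
    # back-substitute in reverse order to the derivation, substituting
    # each solved symbol into the remaining derived equations
    c = hard[:]
    for piv, pos in reversed(marked):
        c[pos] = sum(rows[piv][j] & c[j] for j in range(len(c)) if j != pos) % 2
    return c

def decode(r, H):
    """r: received soft values; BPSK mapping bit 0 -> +1, bit 1 -> -1."""
    n = len(r)
    hard = [0 if x >= 0 else 1 for x in r]            # hard decisions
    order = sorted(range(n), key=lambda i: abs(r[i]))  # reliability ordering
    erased = order[:len(H)]                            # n-k least reliable
    c = solve_erasures(H, hard, erased)
    # squared Euclidean distance between the codeword and the received signal
    sed = sum((x - (1 - 2 * b)) ** 2 for x, b in zip(r, c))
    return c, sed

# The least reliable symbol (position 2) was hard-decided wrongly; it is
# erased and re-solved correctly from the parity checks.
codeword, sed = decode([0.9, 1.1, -0.2, 0.8, 1.0, 0.7, 1.2], H)
```

Claim 2's reprocessing would wrap `decode` in a loop that flips small groups of the reliable hard decisions, re-solves the erasures, and keeps whichever candidate codeword attains the smallest `sed`.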
GB0502613A 2005-02-09 2005-02-09 Improved error correction decoder Expired - Fee Related GB2426671B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB0502613A GB2426671B (en) 2005-02-09 2005-02-09 Improved error correction decoder

Publications (3)

Publication Number Publication Date
GB0502613D0 GB0502613D0 (en) 2005-03-16
GB2426671A true GB2426671A (en) 2006-11-29
GB2426671B GB2426671B (en) 2007-09-19

Family

ID=34355992

Family Applications (1)

Application Number Title Priority Date Filing Date
GB0502613A Expired - Fee Related GB2426671B (en) 2005-02-09 2005-02-09 Improved error correction decoder

Country Status (1)

Country Link
GB (1) GB2426671B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6460162B1 (en) * 1998-05-04 2002-10-01 Alcatel Product code iterative decoding
EP1475893A2 (en) * 2003-05-05 2004-11-10 Her Majesty the Queen in Right of Canada as represented by the Minister of Industry through the Communications Research Centre Soft input decoding for linear codes
EP1536568A1 (en) * 2003-11-26 2005-06-01 Matsushita Electric Industrial Co., Ltd. Belief propagation decoder cancelling the exchange of unreliable messages

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Nouh, A. & Banihashemi, A. H., "Bootstrap decoding of low-density parity-check codes", IEEE Communications Letters, Vol. 6, Issue 9, Sep. 2002, pp. 391-393. *

Similar Documents

Publication Publication Date Title
JP5354979B2 (en) Low density parity check convolutional code (LDPC-CC) encoder and LDPC-CC decoder
US7260766B2 (en) Iterative decoding process
KR101110586B1 (en) Concatenated iterative and algebraic coding
US7246294B2 (en) Method for iterative hard-decision forward error correction decoding
JP3610329B2 (en) Turbo coding method using large minimum distance and system for realizing the same
US7949927B2 (en) Error correction method and apparatus for predetermined error patterns
Mahdavifar et al. On the construction and decoding of concatenated polar codes
US8726137B2 (en) Encoding and decoding methods for expurgated convolutional codes and convolutional turbo codes
EP0728390A1 (en) Method and apparatus for decoder optimization
WO1996008895A9 (en) Method and apparatus for decoder optimization
EP0907256A2 (en) Apparatus for convolutional self-doubly orthogonal encoding and decoding
EP2418796B1 (en) Bitwise reliability indicators from survivor bits in Viterbi decoders
WO2008075004A1 (en) Decoding of serial concatenated codes using erasure patterns
US7231575B2 (en) Apparatus for iterative hard-decision forward error correction decoding
KR20070087518A (en) Hard-decision iteration decoding based on an error-correcting code with a low undetectable error probability
Panem et al. Polynomials in error detection and correction in data communication system
US6986097B1 (en) Method and apparatus for generating parity bits in a forward error correction (FEC) system
RU2295198C1 (en) Code cyclic synchronization method
EP2051384A1 (en) Iterative decoding in a mesh network, corresponding method and system
GB2426671A (en) Error correction decoder using parity check equations and Gaussian reduction
Ratzer Error-correction on non-standard communication channels
JP4116554B2 (en) Turbo decoding method and apparatus for wireless communication
Sonawane et al. Implementation of RS-CC Encoder and Decoder using MATLAB
US7123668B2 (en) Simple detector and method for QPSK symbols
Kukieattikool et al. Variable‐rate staircase codes with RS component codes for optical wireless transmission

Legal Events

Date Code Title Description
PCNP Patent ceased through non-payment of renewal fee

Effective date: 20090209