US20230223958A1 - BCH fast soft decoding beyond the (d-1)/2 bound - Google Patents


Info

Publication number: US20230223958A1
Authority: US (United States)
Prior art keywords: computing, codeword, polynomial, error, mod
Legal status: Granted; Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Application number: US17/647,441
Other versions: US11689221B1 (en)
Inventors: Avner Dor, Yaron Shany, Ariel Doubchak, Amit Berman
Current and original assignee: Samsung Electronics Co., Ltd. (listed assignees may be inaccurate; no legal analysis performed)
Priority applications: US17/647,441 (US11689221B1), DE102022118166.9A (DE102022118166A1), KR1020220127268A (KR20230107104A), CN202211410889.1A (CN116418352A)
Assignors: Berman, Amit; Dor, Avner; Shany, Yaron; Doubchak, Ariel

Classifications

    • H ELECTRICITY → H03 ELECTRONIC CIRCUITRY → H03M CODING; DECODING; CODE CONVERSION IN GENERAL → H03M13/00 Coding, decoding or code conversion, for error detection or error correction; coding theory basic assumptions; coding bounds; error probability evaluation methods; channel models; simulation or testing of codes
    • H03M13/152 Bose-Chaudhuri-Hocquenghem [BCH] codes (via H03M13/03 redundancy in data representation → H03M13/05 block codes → H03M13/13 linear codes → H03M13/15 cyclic codes, e.g. codes defined by a generator polynomial → H03M13/151 using error location or error correction polynomials)
    • H03M13/1575 Direct decoding, e.g. by a direct determination of the error locator polynomial from syndromes and subsequent analysis or by matrix operations involving syndromes, e.g. for codes with a small minimum Hamming distance
    • H03M13/45 Soft decoding, i.e. using symbol reliability information (via H03M13/37 decoding methods or techniques not specific to the particular type of coding)
    • H03M13/458 Soft decoding by updating bit probabilities or hard decisions in an iterative fashion for convergence to a final decoding result
    • H03M13/616 Matrix operations, especially for generator matrices or check matrices, e.g. column or row permutations (via H03M13/61 aspects not provided for otherwise → H03M13/615 use of computational or mathematical techniques)
    • H03M13/1535 Determination and particular use of error location polynomials using the Euclid algorithm (via H03M13/151 → H03M13/1525)

Definitions

  • W can be determined, e.g., by log-likelihood ratios, such that this will be the common case. In fact, the larger
  • A false alarm (FA) means any processing, beyond minimal, of a polynomial checked by the algorithm that is not the actual ELP. In particular, it includes unnecessarily activating the computationally heavy Chien search.
  • An algorithm according to an embodiment has a built-in mechanism that minimizes the usage of the Chien search and reduces other verifications when an FA emerges.
  • an algorithm according to an embodiment foresees bursts of FAs and detects them with reduced complexity. Such FAs may result from an ELP with multiple errors in the weak bits.
  • Each probe requires a Chien search, performed by q·t products, while an algorithm according to an embodiment requires O(r) products on average, a massive reduction.
  • The proof of the low expected number of Chien searches is based on two BCH probability bounds, known as probability bounds 1 and 2 (PB1, PB2), which state that a false alarm probability is upper bounded by q −1 , or even q −s with s>1, in some cases of interest.
  • The main input of an algorithm according to an embodiment is a random odd-square polynomial b(x)∈F[x]. This is a generalized form of a syndrome polynomial.
  • A polynomial B(x) can be transformed into a binary vector. For example, if B(x)=1+x+x 3 +x 5 , the binary vector is 110101.
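A minimal sketch of this polynomial-to-bit-vector convention (the helper name `poly_to_bits` is ours, not the patent's):

```python
def poly_to_bits(degrees, length):
    """Binary-vector form of a GF(2) polynomial: bit i is the
    coefficient of x^i, lowest degree first."""
    bits = [0] * length
    for d in degrees:
        bits[d] ^= 1
    return ''.join(map(str, bits))

# B(x) = 1 + x + x^3 + x^5  ->  "110101"
print(poly_to_bits([0, 1, 3, 5], 6))
```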
  • Φ stands for the evaluation set of the code, which is an auxiliary calculation that assists in the decoding, and W for the weak bits as explained below.
  • the weak bits are those for which the probability of being correct is low.
  • Define A Q to be the matrix obtained from A by omitting all rows that are not in Q*, and B Q is the unique reduced row echelon (RRE) matrix, also referred to as a semi-systematic matrix, whose row space is equal to the row space of A Q .
  • The subsets of W are ordered by a total order, ≺, typically lexicographic, e.g. a depth-first order, wherein for any W 1 and W 2 , subsets of W such that
  • W″=R(W′)⊂W′, which is unique, with
  • (1) Running memory: for every W′⊆W and j∈
  • (2) Computation sharing: for every W′⊆W with
  • An algorithm according to an embodiment is a list decoder, i.e., a decoder whose output is a list of codewords.
  • One codeword in the list is the original valid codeword.
  • The output is the set L, which is an array of codewords, of all (r′, λ(x), Z λ(x),Φ ) such that:
  • FIG. 1 is a flowchart of an error decoding algorithm according to an embodiment of the disclosure.
  • an algorithm according to an embodiment begins at step 101 by receiving a codeword x.
  • An algorithm first computes, at step 102 , a minimal monotone basis of V: {λ i (x)} 1≤i≤r+1 ⊆F[x], and then, at step 103 , computes the matrix A defined above, and computes also:
  • An algorithm goes through every set W′⊆W, with
  • At step 107 , if u(x) is a scalar in F* (i.e., λ(x) is separable), compute λ(Φ\W′) (i.e., a Chien search) and deduce from it Z λ(x),Φ ; otherwise, if deg(u(x))≥1, the processing of W′ ends at step 109 .
  • At step 108 , if u(x) is a scalar and |Z λ(x),Φ |=t+r′, the pair (λ(x), Z λ(x),Φ ) is added to L.
  • This processing requires O(r) products on average instead of the standard O(r 3 ) in a prior art scheme.
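The scalar test on u(x)=gcd(λ(x), λ′(x)) above works because a polynomial is separable exactly when it shares no factor with its formal derivative. A small sketch of that check, with coefficients restricted to GF(2) purely for brevity (the patent's polynomials live over GF(2^m), and the sample polynomials are illustrative):

```python
# GF(2)[x] polynomials as bitmasks: bit i is the coefficient of x^i.
def pdeg(p):
    return p.bit_length() - 1

def pmod(a, b):
    """Remainder of a modulo b in GF(2)[x]."""
    while a and pdeg(a) >= pdeg(b):
        a ^= b << (pdeg(a) - pdeg(b))
    return a

def pgcd(a, b):
    """Euclidean gcd in GF(2)[x]."""
    while b:
        a, b = b, pmod(a, b)
    return a

def deriv(p):
    """Formal derivative over GF(2): only odd-degree terms survive."""
    return (p >> 1) & 0x5555555555555555

separable = 0b1001   # 1 + x^3 = (1+x)(1+x+x^2), squarefree
repeated  = 0b11011  # 1 + x + x^3 + x^4 = (1+x)^2 (1+x+x^2)

print(pdeg(pgcd(separable, deriv(separable))))  # 0: gcd is a nonzero scalar
print(pdeg(pgcd(repeated, deriv(repeated))))    # 2: gcd = 1 + x^2, repeated roots
```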
  • ⁇ 1 (x) might be processed, unnecessarily, by an above algorithm according to an embodiment as part of the handling of the subset W 1 .
  • the likelihood of this unwanted occurrence follows from the fact that:
  • the decoder performs the following preliminary step, (s0), prior to (s1) under the following condition with respect to the minimal r′ that satisfies (3):
  • A decoding system according to an embodiment is shown in FIG. 2 .
  • the codeword is transmitted through a channel 10 with independent and identically distributed transition probability P(z
  • The hard decision decoder 11 receives the channel output and decodes a codeword x̂. Denote the log likelihood ratio of symbol i given the channel value z i as
  • a classic BCH decoder 12 is applied to y. If
  • the classic BCH decoder fails and a BCH soft decoder 13 according to an embodiment is applied.
  • an overview of a BCH soft decoder algorithm is as follows.
  • λ( x )=b 1 λ 1 ( x )+b 2 λ 2 ( x )+ . . . +b r λ r ( x )+λ r+1 ( x ).
  • embodiments of the present disclosure can be implemented in various forms of hardware, software, firmware, special purpose processes, or a combination thereof.
  • the present disclosure can be implemented in hardware as an application-specific integrated circuit (ASIC), or as a field programmable gate array (FPGA).
  • The present disclosure can be implemented in software as an application program tangibly embodied on a computer readable program storage device. The application program can be uploaded to, and executed by, a machine comprising any suitable architecture.
  • any memory-based product such as a solid-state drive (SSD), universal flash storage (UFS) products, DRAM modules, etc.
  • FIG. 3 is a block diagram of a system for implementing an erasure correction algorithm that uses a neural network to perform matrix inversion, according to an embodiment of the disclosure.
  • a computer system 31 for implementing the present disclosure can comprise, inter alia, a central processing unit (CPU) or controller 32 , a memory 33 and an input/output (I/O) interface 34 .
  • the computer system 31 is generally coupled through the I/O interface 34 to a display 35 and various input devices 36 such as a mouse and a keyboard.
  • the support circuits can include circuits such as cache, power supplies, clock circuits, and a communication bus.
  • The memory 33 can include random access memory (RAM), read only memory (ROM), disk drive, tape drive, etc., or a combination thereof.
  • the present disclosure can be implemented as a routine 37 that is stored in memory 33 and executed by the CPU or controller 32 to process the signal from the signal source 38 .
  • the computer system 31 is a general purpose computer system that becomes a specific purpose computer system when executing the routine 37 of the present disclosure.
  • embodiments of the present disclosure can be implemented as an ASIC or FPGA 37 that is in signal communication with the CPU or controller 32 to process the signal from the signal source 38 .
  • the computer system 31 also includes an operating system and micro instruction code.
  • the various processes and functions described herein can either be part of the micro instruction code or part of the application program (or combination thereof) which is executed via the operating system.
  • various other peripheral devices can be connected to the computer platform such as an additional data storage device and a printing device.
  • λ′( x )/λ( x )=Σ 1≤j≤s,r(j) is odd α j /(1−x·α j ).
  • λ′( x )=u 2 ( x )·Σ 1≤j≤s,r(j) is odd α j Π 1≤v≤s,r(v) is odd,v≠j (1−x·α v ),
  • Let K be an extension field of F that contains all roots of λ(x).
  • λ(x)≡Π 1≤j≤s (1−x·α j ) r(j) where α 1 , . . . , α s ∈K* are mutually different and r(j)≥1.
  • b k =Σ 1≤j≤s,r(j) is odd α j k+1 for all 0≤k≤N−1.
  • V M+1 =V M+2 .
  • λ( x )≡Π 1≤j≤s (1−x·α j ) r(j)
  • λ 1 ( x )·b( x )≡λ 1 ′( x ) (mod x N ).
  • A is an M×(N+1) matrix over a field K (a general field with any characteristic).
  • V N is the set of solutions of {L i : i∈{0, 2, . . . , N−2}}.
  • L 1 linearly depends on L 0 .
  • The decoder knows the syndromes {S k } 0≤k≤d-2 . Define the syndrome polynomial:
  • λ*( x )≡Π 1≤j≤τ (1−x·α j )∈F[ x ].
  • the decoder “knows” this space and can find a basis to it.
  • b k =Σ 1≤j≤s,r(j) is odd α j k+1 for all 0≤k≤N−1.
  • α 1 , . . . , α t ∈K* are distinct scalars. There exist a 1 , . . . , a M ∈F such that
  • a 1 , . . . , a M are unique when M≤N/2. Proof: By the claim above there exist unique a 1 , . . . , a M ∈F such that
  • Lemma 15 (Uniqueness Lemma 2 (UL2)).
  • Let K be an extension field of F that contains all roots of λ(x) and all roots of μ(x).
  • Define λ(x) and μ(x) by:
  • λ( x )≡Π 1≤j≤t+r (1−x·α j )
  • μ( x )≡Π 1≤j≤t′+r (1−x·α j ) r(j)
  • a 1 ≡{α j : j∈B}
  • a 2 ≡{α j : r+1≤j≤t+r}
  • a 3 ≡{α j : r+1≤j≤t′+r, r(j) is odd}. It then holds that |a 1 |=b, |a 2 |=t, and |a 3 |=t′≤t−b.
  • v 0 =[b 0 ,1,0, . . . ,0]
  • v 1 =[b 1 ,b 0 ,0, . . . ,0]
  • v 2 =[b 2 ,b 1 ,b 0 ,1,0, . . . ,0]
  • v 3 =[b 3 ,b 2 ,b 1 ,b 0 ,0, . . . ,0]
  • v 4 =[b 4 ,b 3 ,b 2 ,b 1 ,b 0 ,1,0, . . . ,0]
  • v 5 =[b 5 ,b 4 ,b 3 ,b 2 ,b 1 ,b 0 ,0, . . . ,0]
  • v 6 =[b 6 ,b 5 ,b 4 ,b 3 ,b 2 ,b 1 ,b 0 ,1,0, . . . ,0]
  • v N−1 =[b N−1 ,b N−2 ,b N−3 , . . . ,b 2 ,b 1 ,b 0 ],
  • Lemma 19 (Dimension Bound 4). Take t≥r≥r′≥r″≥0 and odd-square b(x)∈F[x] and suppose that
  • β∈F* is an inverse of a root of λ(x), i.e., (1−β·x)|λ(x)
  • β 1 , . . . , β s ∈F* are mutually different inverses of roots of λ(x), i.e., (1−β i ·x)|λ(x)
  • Event A is a prototype of an event in the main soft decoding algorithm, wherein a solution to the key equation turns out to be a false ELP candidate, and hence requires some additional complexity. It will be shown that this event has probability close to q −1 in a first version and close to q −2 in a second version. In the second version there is an insignificant number of false candidates and consequently insignificant added complexity due to a false alarm that requires a Chien search.
  • A series of polynomials {p i (x)} 1≤i≤s is called monotone if deg(p i (x))<deg(p i+1 (x)) for i∈[s−1].
  • B≡{p i (x)} 1≤i≤s+1 ⊆F[x] is called a monotone basis of W if {p i (x)} 1≤i≤s is a monotone basis of U and p s+1 (x)∈F[x]\U.
  • B is called a minimal monotone basis of W if B is monotone and deg(p s+1 (x)) is minimal among all such bases.
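The degree condition in these definitions is mechanical to check; a tiny sketch with polynomials stored as GF(2) bitmasks (the sample values are illustrative, not from the patent):

```python
def pdeg(p):
    """Degree of a GF(2)[x] polynomial stored as a bitmask (bit i = coeff of x^i)."""
    return p.bit_length() - 1

def is_monotone(polys):
    """deg(p_i) < deg(p_{i+1}) for every consecutive pair, as in the definition."""
    return all(pdeg(a) < pdeg(b) for a, b in zip(polys, polys[1:]))

print(is_monotone([0b1, 0b110, 0b10010]))  # True: degrees 0 < 2 < 4
print(is_monotone([0b100, 0b101]))         # False: degrees 2, 2
```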

Landscapes

  • Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Pure & Applied Mathematics (AREA)
  • Algebra (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Computing Systems (AREA)
  • Error Detection And Correction (AREA)

Abstract

A method for Bose-Chaudhuri-Hocquenghem (BCH) soft error decoding includes receiving a codeword x, wherein the received codeword x has τ=t+r errors for some r≥1; computing a minimal monotone basis {λi(x)}1≤i≤r+1⊆F[x] of an affine space V={λ(x)∈F[x]:λ(x)·S(x)=λ′(x) (mod x2t), λ(0)=1, deg(λ(x))≤t+r}, wherein λ(x) is an error locator polynomial and S(x) is a syndrome; computing a matrix A≡(λj(βi))i∈[w],j∈[r+1], wherein W={β1, . . . , βw} is a set of weak bits in x; constructing a submatrix of r+1 rows from submatrices of r+1 rows of the subsets of A such that the last column is a linear combination of the other columns; forming a candidate error locating polynomial using coefficients of the minimal monotone basis that result from the constructed submatrix; performing a fast Chien search to verify the candidate error locating polynomial; and flipping channel hard decisions at error locations found in the candidate error locating polynomial.

Description

    TECHNICAL FIELD
  • Embodiments of the disclosure are directed to algorithms for deterministically decoding Bose-Chaudhuri-Hocquenghem (BCH) codes with up to r errors beyond the (d−1)/2 Hamming distance in error patterns that occur with very high probability, which improve the raw bit error rate (BER) coverage of BCH and soft-BCH (SBCH) codes.
  • DISCUSSION OF THE RELATED ART
  • A widely known and broadly used BCH soft decoding scheme due to Chase deterministically decodes BCH codes by randomly flipping weak bits and then performing full hard decision (HD) BCH decoding per flip. Other prior-art fast Chase decoders use partial decoding per iteration, but the decoder covers a smaller range of error patterns. The fast Chase of Wu et al. increased soft decoding capability in comparison to Chase soft decoding, which offered an improvement over the classical HD BCH decoder. However, the prior art algorithms require essentially t+r operations per iteration by processing entire error-locator-polynomial (ELP)-type polynomials, and can decode only when the number of weak bits that are errors≥r+1.
  • SUMMARY
  • Embodiments of the present disclosure provide methods of: (1) finding and proving a dimension bound on the linear space of solutions of the (t+r)-key-equations; (2) reducing the core processing to a small evaluation set that is linked to an r-size linear basis of the key equations; (3) vast computational sharing between iterations; and (4) a combinatorial ordering that governs the solution of related linear equations. Embodiments of the present disclosure afford a complexity reduction when there are more errors in the set of weak bits. Embodiments of the present disclosure further provide soft decoding capability beyond Wu's algorithm.
  • Algorithms according to embodiments of the present disclosure use r operations per iteration by passing from an evaluation set of a basis to ELP-type polynomials, can decode when the number of weak bits that are errors≥r−1, and provide a substantial reduction in complexity as the number of errors in the weak bits increases. A design according to embodiments of the disclosure enables decoding whenever the number of weak bits that are errors≥r+1 and
  • r·(w choose r+1)≤C,
  • and also whenever the number of weak bits that are errors≥r−1 and
  • c×n×(w choose r)≤C,
  • where w is the number of weak bits, c>0, and C>0 is the complexity budget.
  • According to an embodiment of the disclosure, there is provided a computer-implemented method of Bose-Chaudhuri-Hocquenghem (BCH) soft error decoding, including receiving a codeword x through a communication channel, wherein the received codeword x has τ=t+r errors for some r≥1, wherein t=(d−1)/2 and d is a minimal distance of a BCH code; computing a minimal monotone basis {λi(x)}1≤i≤r+1⊆F[x] of an affine space V={λ(x)∈F[x]:λ(x)·S(x)=λ′(x) (mod x2t), λ(0)=1, deg(λ(x))≤t+r}, wherein λ(x) is an error locator polynomial, S(x) is a syndrome, and F=GF(q) wherein q=2m for m>1; computing a matrix A≡(λj(βi))i∈[w],j∈[r+1], wherein W={β1, . . . , βw} is a set of weak bits in x; and processing every subset W′⊆W by retrieving from memory a set W″=R(W′), computing BW′ by adding one row to BW″ and performing Gaussian elimination operations on BW′, wherein R(W′) is reliability probabilities of the bits in W′. When a first r′ columns of BW′ are a transpose of a systematic matrix and deg(λ(x))=t+r′, wherein 1≤r′≤r, the method further includes computing u(x)=gcd(λ(x), λ′(x)), wherein λ′(x) is a derivative of λ(x); computing λ(Φ\W′) and deducing from it Zλ(x),Φ, wherein Zλ(x),Φ={β∈Φ:λ(β)=0}, when u(x) is a scalar in F*; adding a pair (λ(x), Zλ(x),Φ) to a set L of all (r′, λ(x), Zλ(x),Φ) such that 1≤r′≤r, λ(x)∈V′r′, |Zλ(x),W|≥r′+1, and |Zλ(x),Φ|=t+r′, when |Zλ(x),Φ|=t+r′; and outputting the set L.
  • According to a further embodiment of the disclosure, the one row added to BW″ is an arbitrary odd-square polynomial in the codeword x.
  • According to a further embodiment of the disclosure, the method includes forming the error locating polynomial from coefficients in the set L, and flipping channel hard decisions at error locations found in the received codeword.
  • According to a further embodiment of the disclosure, λ(x)∈Vr′ is unique and λ(β)=0 for every β∈W′, when the first r′ columns of BW′ are a transpose of a systematic matrix.
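Checking that the first r′ columns of B W′ form the transpose of a systematic matrix amounts to verifying that the reduced row echelon form carries an identity block in those columns. A minimal sketch over GF(2) (the patent's matrices have entries in GF(2^m); binary entries and the sample matrix are our simplifications):

```python
def rre_gf2(rows, ncols):
    """Reduced row echelon form of a GF(2) matrix; each row is a
    bitmask with bit j holding the entry in column j."""
    rows, r = rows[:], 0
    for col in range(ncols):
        piv = next((i for i in range(r, len(rows)) if rows[i] >> col & 1), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i] >> col & 1:
                rows[i] ^= rows[r]
        r += 1
    return rows

def leading_identity(rows, k):
    """True when the first k columns carry the k x k identity, row i pivoting on column i."""
    return all(i < len(rows) and rows[i] & ((1 << k) - 1) == 1 << i for i in range(k))

# rows of [[1,0,0,1],[0,1,0,1],[1,1,0,0]] as bitmasks
B = rre_gf2([0b1001, 0b1010, 0b0011], 4)
print(leading_identity(B, 2))  # True
print(leading_identity(B, 3))  # False: column 2 has no pivot
```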
  • According to a further embodiment of the disclosure, the method includes terminating the processing of W′ when deg(u(x))≥1.
  • According to a further embodiment of the disclosure, the method includes terminating the processing of W′ when the first r′ columns of BW′ are not a transpose of a systematic matrix or deg(λ(x))≠t+r′.
  • According to a further embodiment of the disclosure, the method includes, before computing u(x)=gcd(λ(x),λ′(x)), computing, for every r≥ρ≥r′+2 and every pair (W1, λ1(x)) such that λ1(x)∈V′ρ and W1⊆W with |W1|=ρ+1, wherein λ1(x)∈Vρ is a unique polynomial such that λ1(W1)=0, the values λ1′(β) for every β in W1.
  • According to a further embodiment of the disclosure, the method includes terminating the processing of W1 when for any β in W1, λ1′(β)=0.
  • According to an embodiment of the disclosure, there is provided a non-transitory program storage device readable by a computer, tangibly embodying a program of instructions executed by the computer to perform method steps for Bose-Chaudhuri-Hocquenghem (BCH) soft error decoding. The method includes receiving a codeword x through a communication channel, wherein the received codeword x has τ=t+r errors for some r≥1, wherein t=(d−1)/2 and d is a minimal distance of a BCH code; computing a minimal monotone basis {λi(x)}1≤i≤r+1⊆F[x] of an affine space V={λ(x)∈F[x]:λ(x)·S(x)=λ′(x) (mod x2t), λ(0)=1, deg(λ(x))≤t+r}, wherein λ(x) is an error locator polynomial, S(x) is a syndrome, and F=GF(q) wherein q=2m for m>1; computing a matrix A≡(λj(βi))i∈[w], j∈[r+1], wherein W={β1, . . . , βw} is a set of weak bits in x; constructing a submatrix of r+1 rows from submatrices of r+1 rows of the subsets of A such that the last column is a linear combination of the other columns; forming a candidate error locating polynomial using coefficients of the minimal monotone basis that result from the constructed submatrix; performing a fast Chien search wherein the candidate error locating polynomial is verified; and flipping channel hard decisions at error locations found in the candidate error locating polynomial and returning the decoded codeword x.
  • According to a further embodiment of the disclosure, constructing a submatrix of r+1 rows from submatrices of r+1 rows of the subsets of A such that the last column is a linear combination of the other columns includes processing every subset W′⊆W by retrieving from memory a set W″=R(W′), computing BW′ by adding one row to BW″ and performing Gaussian elimination operations on BW′, wherein R(W′) is reliability probabilities of the bits in W′. When a first r′ columns of BW′ are a transpose of a systematic matrix and deg(λ(x))=t+r′, wherein 1≤r′≤r, the method includes computing u(x)=gcd(λ(x), λ′(x)), wherein λ′(x) is a derivative of λ(x); computing λ(Φ\W′) and deducing from it Zλ(x),Φ, wherein Zλ(x),Φ={β∈Φ:λ(β)=0}, when u(x) is a scalar in F*; adding a pair (λ(x), Zλ(x),Φ) to a set L of all (r′, λ(x), Zλ(x),Φ) such that 1≤r′≤r, λ(x)∈V′r′, |Zλ(x),W|≥r′+1, and |Zλ(x),Φ|=t+r′, when |Zλ(x),Φ|=t+r′; and outputting the set L.
  • According to an embodiment of the disclosure, there is provided a computer memory-based product, including a memory; and a digital circuit tangibly embodying a program of instructions executable to perform a method for Bose-Chaudhuri-Hocquenghem (BCH) soft error decoding.
  • According to a further embodiment of the disclosure, the memory is at least one of a solid-state drive, a universal flash storage, or a DRAM.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a flow chart of an error decoding algorithm according to an embodiment of the disclosure.
  • FIG. 2 is a block diagram of a new architecture for implementing an error decoding algorithm, according to an embodiment of the disclosure.
  • FIG. 3 is a block diagram of a system for implementing a new architecture for an error decoding algorithm according to an embodiment of the disclosure.
  • DETAILED DESCRIPTION
  • Introduction—Part 1
  • Let m>1, q=2^m, F=GF(q), let d be the minimal distance of the BCH code, t=(d−1)/2, and let α be a primitive element of F. 1<n<2^m is the BCH code length and k=n−2t is the code dimension. Consider a BCH code whose evaluation set is A={α^1, . . . , α^n}, and whose parity check matrix is H=(α^{i·j}) such that 1≤i≤2t, 1≤j≤n.
  • A codeword X=(x_1, . . . , x_n)∈GF(2)^n was transmitted and a word Y=(y_1, . . . , y_n)∈GF(2)^n is received. The error word is e=Y−X=(e_1, . . . , e_n) and E={α^u such that e_u=1} is the set of error locations. The decoder computes a standard BCH syndrome: [S_0, . . . , S_{d−2}]^T=H·Y=H·e, which is a vector in F^{d−1}. The syndrome polynomial is

  • S(x)=Σ_{0≤i≤d−2} S_i·x^i.
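As an illustrative sketch of the syndrome computation (the toy field GF(16) with primitive polynomial x^4+x+1 and α the class of x is an assumption for concreteness, not the patent's implementation), S_k is the sum of (α^u)^{k+1} over the positions u where the word has a one:

```python
def gf_mul(a, b, poly=0b10011, deg=4):
    """Multiply two GF(2^4) elements (4-bit ints) modulo x^4 + x + 1."""
    r = 0
    while b:
        if b & 1:
            r ^= a          # addition in characteristic 2 is XOR
        a <<= 1
        if a >> deg:
            a ^= poly       # reduce modulo the primitive polynomial
        b >>= 1
    return r

def gf_pow(a, n):
    r = 1
    for _ in range(n):
        r = gf_mul(r, a)
    return r

def syndromes(y, t, alpha=0b10):
    """S_k = sum_{u : y_u = 1} (alpha^u)^(k+1) for k = 0..2t-1, i.e. H*y
    with parity-check entries alpha^{i*u}; positions u are 1-indexed."""
    S = []
    for k in range(2 * t):
        s = 0
        for u, bit in enumerate(y, start=1):
            if bit:
                s ^= gf_pow(gf_pow(alpha, u), k + 1)
        S.append(s)
    return S
```

For an all-zero input the syndromes vanish; a single set bit at position u yields S_k=α^{u(k+1)}, which is the pattern the decoder inverts.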
  • The receiver tries at first to decode with the standard Berlekamp-Massey (BM) algorithm combined with a Chien search. If it fails, it proceeds with a proposed fast soft decoding according to an embodiment of the disclosure. Failing BM means that the received word has τ=t+r errors for some r≥1. The set of error locations is denoted by E_0={α_1, . . . , α_τ}⊆A, where E_0 is unknown to the decoder. The following algorithm succeeds whenever the number of errors is t+r′ for some 1≤r′≤r. Initially the soft decoder observes a set W⊆A of weak bits. Typically w≡|W|<<n. The error locator polynomial (ELP) is defined by:

  • λ*(x)=Π_{1≤j≤t+r}(1−x·α_j).
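The product form of λ*(x) can be expanded coefficient by coefficient; a minimal sketch under the same toy-field assumption as above (GF(16), primitive polynomial x^4+x+1; in characteristic 2, 1−c=1+c):

```python
def gf_mul(a, b, poly=0b10011, deg=4):
    """Multiply two GF(2^4) elements modulo x^4 + x + 1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a >> deg:
            a ^= poly
        b >>= 1
    return r

def elp(locations):
    """Expand prod_j (1 - x*a_j) into a coefficient list, lowest degree first."""
    lam = [1]
    for a in locations:
        nxt = lam + [0]
        for i, c in enumerate(lam):
            nxt[i + 1] ^= gf_mul(c, a)   # add a*x times the current product
        lam = nxt
    return lam

def poly_eval(lam, beta):
    """Horner evaluation of a coefficient list at beta in GF(2^4)."""
    r = 0
    for c in reversed(lam):
        r = gf_mul(r, beta) ^ c
    return r
```

By construction λ*(0)=1, and the inverses of the error locators are exactly the roots, which is what the Chien search looks for.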
  • Set E={1/β: β∈E_0}. For β∈F it holds that β∈E iff λ*(β)=0. The task of the following soft decoding algorithm is to first find λ*(x) and then E. Invoking the BCH key equations, the following affine polynomial space is defined:

  • V={λ(x)∈F[x] such that λ(x)·S(x)=λ′(x) (mod x^{d−1}), and λ(0)=1, deg(λ(x))≤t+r},

  • and

  • U=V+λ*(x).
  • By the above λ*(x)∈V, and it has been proved that dim(U)=dim*(V)≤r, and

  • U={λ(x)∈F[x] such that λ(x)·S(x)=λ′(x) (mod x^{d−1}), and λ(0)=0, deg(λ(x))≤τ}.
  • Note also that U=V+λ(x) for every λ(x)∈V.
  • When |E∩W|≥r+1, an algorithm according to an embodiment has complexity
  • C(w, r)=O(r·(w choose r+1)),
  • where (w choose r+1) denotes the binomial coefficient.
  • W can be determined, e.g., by log-likelihood ratios, such that this will be the common case. In fact, the larger |E∩W| is, the faster the algorithm becomes.
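As a quick numeric illustration of this complexity bound (the parameter values below are hypothetical):

```python
from math import comb

def probe_bound(w, r):
    """Dominant term of C(w, r) = O(r * (w choose r+1)): the number of
    processed subsets of weak bits times the O(r) work per subset."""
    return r * comb(w, r + 1)
```

For example, w=20 weak bits and r=2 extra errors give 2·(20 choose 3)=2280 products, versus a full Chien search per probe in a Chase-style decoder.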
  • Introduction—Part 2
  • Following the above notations, set m>1, q=2^m, F=GF(q), and let d=2t+1 be the code minimal distance and t+r (t≥r≥1) the maximal number of errors that the ensuing algorithm can correct. This section provides an overview of the BCH soft decoding procedure without the details of the ECC and BCH context, without the details of the building of the basis of V, and without mathematical proofs.
  • In an embodiment, a false alarm (FA) means any processing, beyond minimal, of a polynomial, checked by the algorithm, which is not the actual ELP. In particular, it includes unnecessarily activating the computationally heavy Chien search. An algorithm according to an embodiment has a built in mechanism that minimizes the usage of Chien search and reduces other verifications when FA emerges. In particular an algorithm according to an embodiment foresees bursts of FAs and detects them with reduced complexity. Such FAs may result from an ELP with multiple errors in the weak bits.
  • In a standard BCH soft decoding algorithm, called a Chase algorithm, each probe requires a Chien search, performed by q×t products, while an algorithm according to an embodiment requires O(r) products on average, a massive reduction. The proof of the low expected number of Chien search is based on two BCH probability bounds, known as probability bounds 1 and 2 (PB1, PB2), which state that a false alarm probability is upper bounded by q−1, or even q−s, with s>1 in some cases of interest.
  • For N≥1, b(x)=Σ_{0≤k<N} b_k·x^k∈F[x] is called odd-square if for all 0≤k<(N−1)/2: b_k^2=b_{2k+1}. In the following overview the main input of an algorithm according to an embodiment is a random odd-square polynomial b(x)∈F[x]. This is a generalized form of a syndrome polynomial.
  • A polynomial B(x) can be transformed into a binary vector. For example, if B(x)=1+x+x^3+x^5, the binary vector is 110101.
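A one-line sketch of this transformation (a hypothetical helper: exponent list in, bit string out, with bit i being the coefficient of x^i):

```python
def poly_to_bits(exponents, length):
    """Binary vector of a GF(2) polynomial given its nonzero exponents."""
    bits = ["0"] * length
    for e in exponents:
        bits[e] = "1"
    return "".join(bits)
```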
  • Note that a computation of the GCD (greatest common divisor) of two polynomials of degree ≤N with the Euclidean algorithm can be performed with O(N^2) products.
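The Euclidean algorithm over GF(2)[x] can be sketched with polynomials stored as bitmasks (bit i = coefficient of x^i); each remainder step is XOR long division:

```python
def deg(p):
    """Degree of a GF(2)[x] polynomial stored as a bitmask (deg(0) = -1)."""
    return p.bit_length() - 1

def poly_mod(a, b):
    """Remainder of a divided by b in GF(2)[x] via XOR long division."""
    while a and deg(a) >= deg(b):
        a ^= b << (deg(a) - deg(b))
    return a

def poly_gcd(a, b):
    """Euclidean algorithm; overall cost is O(N^2) coefficient operations."""
    while b:
        a, b = b, poly_mod(a, b)
    return a
```

For instance, gcd((x+1)^2, x(x+1)) comes out as x+1.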
  • A theoretical justification of the algorithms presented below is provided in the Appendix that follows this Detailed Description.
  • Input
  • In this general setting the input of the algorithm is:
  • (1) b(x)∈F[x], an arbitrary odd-square polynomial—a generalized form of the syndrome polynomial;
    (2) integers (t, r, n, m) where 2^m>n>t≥r≥1, n>w≥r≥1 and F=GF(2^m);
    (3) sets W⊆Φ⊆F*, wherein F* is the set of nonzero elements of F, such that n=|Φ| and w=|W|.
  • Here Φ stands for the evaluation set of the code, an auxiliary set that assists in the decoding, and W stands for the weak bits, as explained below. The weak bits are those for which the probability of being correct is low.
  • Setting, Notations, Processing Principle, and Running Memory
  • For 0≤r′≤r define:

  • V_{r′}≡V_{2t,t+r′,b(x)}≡{λ(x)∈F[x]: λ(x)·b(x)=λ′(x) (mod x^{2t}), deg(λ(x))≤t+r′, λ(0)=1},

  • V′_{r′}≡{λ(x)∈F[x]: λ(x)·b(x)=λ′(x) (mod x^{2t}), deg(λ(x))=t+r′, λ(0)=1},

  • V≡V_r,
  • and write
    W={β_1, . . . , β_w}, where the β_i are the field locations of the weak bits.
    Note that it can be assumed without loss of generality that dim(V)=r. ♦
    For every λ(x)∈F[x] and a set U⊆F, define

  • λ(U)={λ(β):β∈U},

  • Z_{λ(x),U}={β∈U: λ(β)=0}.♦
  • Take 1≤r′≤r. Note that by the uniqueness lemma, if λ(x)∈V_{r′} is separable, and for Z⊆F, |Z|≥r′, Z is a zero set for λ(x), i.e., λ(Z)={0}, then λ(x) is the only polynomial in V_{r′} for which Z is a zero set. ♦
  • Definition. For Q⊆W define Q*={i∈[w]:βi∈Q}.
  • Define

  • A≡(λ_j(β_i))_{i∈[w], j∈[r+1]},
  • and for Q⊆W define AQ to be the matrix obtained from A by omitting all rows that are not in Q* and BQ is the unique reduced row echelon (RRE) matrix, also referred to as a semi systematic matrix, whose row space is equal to the row space of AQ.♦
  • A matrix B is called systematic if B=[I, C], i.e., B is the concatenation of I and C into one matrix, where I is the identity matrix. ♦
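The reduction to (semi-)systematic form over GF(2) can be sketched generically (plain Gaussian elimination on 0/1 rows; the patent's decoder instead updates B_{W′} incrementally from B_{R(W′)}):

```python
def rre_gf2(rows, ncols):
    """Reduced row echelon form over GF(2); rows are lists of 0/1."""
    M = [row[:] for row in rows]
    pivot = 0
    for col in range(ncols):
        # find a row at or below the pivot with a 1 in this column
        for r in range(pivot, len(M)):
            if M[r][col]:
                M[pivot], M[r] = M[r], M[pivot]
                break
        else:
            continue
        # clear the column in every other row (reduced form)
        for r in range(len(M)):
            if r != pivot and M[r][col]:
                M[r] = [a ^ b for a, b in zip(M[r], M[pivot])]
        pivot += 1
    return M
```

When the left block becomes the identity, the result is systematic in the sense [I, C] used above.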
  • Set Ordering and the Processing Principle.
  • The subsets of W are ordered by a total order, <, typically lexicographic, e.g. a depth first order, wherein for any W1 and W2, subsets of W such that |Wi|≤r+1, if W1<W2 then W1 is processed before W2. There is a mapping R such that for every W′⊆W, 1≤|W′|≤r+1 there is W″=R(W′)⊆W′, which is unique, with |W″|=|W′|−1, such that the following holds:
  • (1) Running memory. For every W′⊆W and j≡|W′|≤r+1, the running memory stored before W′ is processed contains {W′(i): i∈[j]}, where Ø=W′(0)<W′(1)<W′(2)< . . . <W′(j)=W′ and for i∈[j]: |W′(i)|=i, and R(W′(i))=W′(i−1), which implies that the running memory is very small.
    (2) Computation sharing. For every W′⊆W with |W′|≤r+1, when W′ is processed the decoder computes at first BW′. It is performed after retrieving from memory the matrix BR(W′), and then performing a minimal amount of delta Gaussian elimination operations to compute BW′. It takes an average of O(r) products per W′.
  • Output
  • An algorithm according to an embodiment is a list decoder, which is a decoder whose output is a list of codewords. One codeword in the list is the original valid codeword. The output is the set L, which is an array of codewords, of all (r′, λ(x), Z_{λ(x),Φ}) such that:

  • 1≤r′≤r, λ(x)∈V′_{r′}, |Z_{λ(x),W}|≥r′+1, and |Z_{λ(x),Φ}|=t+r′.
  • Steps
  • FIG. 1 is a flowchart of an error decoding algorithm according to an embodiment of the disclosure. Referring now to the figure, an algorithm according to an embodiment begins at step 101 by receiving a codeword x.
  • An algorithm according to an embodiment computes first, at step 102, a minimal monotone basis of V: {λi(x)}1≤i≤r+1⊆F[x], and then, at step 103, computes the matrix A defined above, and computes also:

  • {λ_j(β): β∈Φ\W, j∈[r+1]}.
  • Methods for computing the minimal monotone basis of V and the matrix A are known in the art.
  • (ii) At step 104, an algorithm according to an embodiment goes through every set W′⊆W, with |W′|≤r+1, in accordance with the order <. When W′⊆W, with r′+1≡|W′|≤r+1, is processed, the decoder retrieves from the running memory W″=R(W′), which is read data and reliability probabilities, and computes B_{W′} by adding one row to B_{W″} and performing a minimal number of Gaussian elimination operations. If, at step 105, the first r′ columns of B_{W′} are a transpose of a systematic matrix, there is an instant check that tells the decoder whether there exists a unique λ(x)∈V_{r′} such that λ(β)=0 for every β∈W′. If the answer is positive and deg(λ(x))=t+r′, the following steps take place; otherwise the processing of W′ ends at step 109, where the set L is output.
  • (s1) At step 106, apply the Euclidean algorithm to compute u(x)=gcd(λ(x),λ′(x)).
    (s2) At step 107, if u(x) is a scalar in F* (i.e., λ(x) is separable), compute λ(Φ\W′) (i.e., a Chien search) and deduce from it Z_{λ(x),Φ}; otherwise, if deg(u(x))≥1, the processing of W′ ends at step 109.
    (s3) At step 108, if u(x) is a scalar and |Zλ(x),Φ|=t+r′, the pair (λ(x), Zλ(x),Φ) is added to L.
  • As mentioned above, this processing requires O(r) products on average instead of the standard O(r^3) in a prior art scheme.
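The separability test of steps (s1)-(s2) can be sketched as follows (an illustrative sketch over GF(2)[x] with bitmask polynomials; the decoder's coefficients actually live in GF(2^m), but the characteristic-2 derivative rule, in which only odd-degree terms survive, is the same):

```python
def deg(p):
    return p.bit_length() - 1

def poly_deriv(p):
    """Formal derivative in characteristic 2: i*c_i vanishes for even i."""
    d = 0
    for i in range(1, p.bit_length()):
        if i % 2 == 1 and (p >> i) & 1:
            d |= 1 << (i - 1)
    return d

def poly_mod(a, b):
    while a and deg(a) >= deg(b):
        a ^= b << (deg(a) - deg(b))
    return a

def is_separable(p):
    """p has no repeated roots iff gcd(p, p') is a nonzero constant."""
    a, b = p, poly_deriv(p)
    while b:
        a, b = b, poly_mod(a, b)
    return a == 1
```

For example, 1+x^3=(1+x)(1+x+x^2) is separable, while (1+x)^2=1+x^2 has zero derivative and fails the test.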
  • Comments and Further Reduction of False Alarm in Some Distinct Cases
  • (1) Following (i), in an algorithm according to an embodiment, the computation of λ(U) for λ(x)∈V_{r′} and a subset U⊆F, e.g. a Chien search when U=Φ, is done in a fast mode that requires r′ products for each β, instead of t+r′ in the standard method. This is due to the fact that λ(x)−λ_{r+1}(x) is a linear combination of {λ_i(x)}_{1≤i≤r′}⊆F[x].
    (2) It follows from the Probability Bound 2 (PB2), described in the appendix below, that in BCH decoding, for W′⊆W, with |W′|=r′+s (s≥1), the probability that there exists λ(x)∈V′_{r′} which is not the ELP such that λ(W′)={0} is upper bounded by q^{−s}/(1−q^{−2}). Observe that if s=1, then no product of λ(x) will appear again in the algorithm.
    (3) Suppose that s=a+1, where a≥1 and r≥r′+a+1=r′+s and there exists W′⊆W with |W′|=r′+s, and a separable λ(x)∈V′r′, such that λ(W′)={0}. Such event can be portrayed as an event of an overflow of zeros within W per a polynomial in V, in comparison to its degree.
    (4) It follows from the supposition in (3) that for every 1≤b≤a such that: r′+2b≤r and r′+1+a+b≤w, take any mutually different β1, . . . , βb∈W\W′, and define:

  • λ_1(x)≡(1−β_1·x)^2 · · · (1−β_b·x)^2·λ(x) and W_1≡W′∪{β_1, . . . , β_b}.
  • It holds that λ1(x) might be processed, unnecessarily, by an above algorithm according to an embodiment as part of the handling of the subset W1. The likelihood of this unwanted occurrence follows from the fact that:

  • deg(λ_1(x))=t+r′+2b, W_1⊆W, |W_1|=r′+a+b+1, a≥b, and λ_1(W_1)={0}.
  • While the incidence of (3) is very rare in the case that λ(x) is not an ELP, (see (2) above), it can occur sometimes when λ(x) is ELP. It depends on the input of the algorithm. When (3) occurs, for some λ(x)∈V′r′, in an embodiment, the decoder performs the following preliminary step, (s0), prior to (s1) under the following condition with respect to the minimal r′ that satisfies (3):
  • (s0) For every r≥ρ≥r′+2 and a pair (W_1, λ_1(x)) such that λ_1(x)∈V′_ρ and W_1⊆W with |W_1|=ρ+1, wherein λ_1(x)∈V_ρ is the unique polynomial such that λ_1(W_1)={0}, the decoder computes λ_1′(β) for every β in W_1, and if for any β in W_1, λ_1′(β)=0, the processor ends the processing of W_1.♦
  • Observe that if λ1′(β)=0 for some β in W1 then λ1(x) is not separable. Note also that the computation of λ1′(β) requires only (t+ρ)/2 products.
  • Overview
  • A decoding system according to an embodiment is shown in FIG. 2. According to an embodiment, denote by x={x_i}_{i=1}^n the (n, k, d) BCH codeword, where x_i∈GF(2), k is the code dimension, n is the code length and d is the BCH code minimal distance. The codeword is transmitted through a channel 10 with independent and identically distributed transition probability P(z|x), where z∈ℝ and x∈GF(2). The hard decision decoder 11 receives the channel output and decodes a codeword x̂. Denote the log likelihood ratio of symbol i given the channel value z_i as
  • R_i=log(P(z_i|x=0)/P(z_i|x=1)),
  • and y as the channel hard decision, where
  • y_i=0 if LLR_i≥0, and y_i=1 otherwise.
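A minimal sketch of this hard-decision rule:

```python
def hard_decision(llrs):
    """y_i = 0 when LLR_i >= 0 (zero is the likelier symbol), else 1."""
    return [0 if llr >= 0 else 1 for llr in llrs]
```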
  • A classic BCH decoder 12 is applied to y. If |{j: x_j≠y_j, 1≤j≤n}|>t, where t=(d−1)/2, the classic BCH decoder fails and a BCH soft decoder 13 according to an embodiment is applied.
  • According to an embodiment, an overview of a BCH soft decoder algorithm is as follows.
  • Input: z, y. Output: x̂.
  • 1. Find a set of w weak bit locations (lowest likelihood ratios):

  • W={β_i}_{1≤i≤w}, β_i=α^{j_i}, j_i∈[0, n−1].
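Step 1 can be sketched as picking the w indices with the smallest |LLR| (a hypothetical helper; ties are broken by index order here):

```python
def weak_bit_locations(llrs, w):
    """Indices of the w least reliable bits (smallest absolute LLR)."""
    return sorted(range(len(llrs)), key=lambda i: abs(llrs[i]))[:w]
```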
  • 2. The solution to the t+r key equations forms an r-dimensional affine space.
    Find a monotone affine basis: Λ={λ_1(x), . . . , λ_{r+1}(x)}.
    With high probability, the ELP is given as an affine combination of this basis:

  • λ(x)=b_1·λ_1(x)+b_2·λ_2(x)+ . . . +b_r·λ_r(x)+λ_{r+1}(x).
  • 3. Look efficiently for r+1 of the w locations that zero the ELP polynomial with some coefficients {b_i}_{1≤i≤r}:
    a. Compute the solution matrix:
  • A={a_{ij}=λ_j(β_i)}_{1≤i≤w, 1≤j≤r+1}, a w×(r+1) matrix whose i-th row is (a_{i,1}, . . . , a_{i,r+1}).
  • b. Go over all combinations of submatrices of r+1 rows of the subsets of A, to find a submatrix of r+1 rows such that the last column is a linear combination of the other columns. This part produces the coefficients of the affine basis b and r+1 error locations. This is the main part of the algorithm and it is described in detail above in steps (ii), s1, s2 and s3.
    Computation sharing reduces the complexity of each check from O(r3) to O(r).
    c. Form the candidate ELP using the resulting coefficients.
    4. Fast Chien search to verify the candidate ELP and error locations.
    5. Flip the channel hard decisions at the error locations found in step 3 and return the decoded word x̂.
  • System Implementations
  • It is to be understood that embodiments of the present disclosure can be implemented in various forms of hardware, software, firmware, special purpose processes, or a combination thereof. In one embodiment, the present disclosure can be implemented in hardware as an application-specific integrated circuit (ASIC), or as a field programmable gate array (FPGA). In another embodiment, the present disclosure can be implemented in software as an application program tangibly embodied on a computer readable program storage device. The application program can be uploaded to, and executed by, a machine comprising any suitable architecture.
  • In addition, methods and implementations of embodiments of the disclosure can be used or incorporated into any memory-based product, such as a solid-state drive (SSD), universal flash storage (UFS) products, DRAM modules, etc.
  • FIG. 3 is a block diagram of a system for implementing a new architecture for an error decoding algorithm, according to an embodiment of the disclosure. Referring now to FIG. 3, a computer system 31 for implementing the present disclosure can comprise, inter alia, a central processing unit (CPU) or controller 32, a memory 33 and an input/output (I/O) interface 34. The computer system 31 is generally coupled through the I/O interface 34 to a display 35 and various input devices 36 such as a mouse and a keyboard. The support circuits can include circuits such as cache, power supplies, clock circuits, and a communication bus. The memory 33 can include random access memory (RAM), read only memory (ROM), disk drive, tape drive, etc., or a combination thereof. The present disclosure can be implemented as a routine 37 that is stored in memory 33 and executed by the CPU or controller 32 to process the signal from the signal source 38. As such, the computer system 31 is a general purpose computer system that becomes a specific purpose computer system when executing the routine 37 of the present disclosure. Alternatively, as described above, embodiments of the present disclosure can be implemented as an ASIC or FPGA 37 that is in signal communication with the CPU or controller 32 to process the signal from the signal source 38.
  • The computer system 31 also includes an operating system and micro instruction code. The various processes and functions described herein can either be part of the micro instruction code or part of the application program (or combination thereof) which is executed via the operating system. In addition, various other peripheral devices can be connected to the computer platform such as an additional data storage device and a printing device.
  • It is to be further understood that, because some of the constituent system components and method steps depicted in the accompanying figures can be implemented in software, the actual connections between the systems components (or the process steps) may differ depending upon the manner in which the present disclosure is programmed. Given the teachings of the present disclosure provided herein, one of ordinary skill in the related art will be able to contemplate these and similar implementations or configurations of the present disclosure.
  • While the present disclosure has been described in detail with reference to exemplary embodiments, those skilled in the art will appreciate that various modifications and substitutions can be made thereto without departing from the spirit and scope of the disclosure as set forth in the appended claims.
  • APPENDIX
  • 1. Analysis of the BCH Key Equations I: Beyond the (D−1)/2 Radius, and the Dimension Equality
  • 1.1 Introduction
  • Here F=GF(2^m), m>1, and the empty sum is zero.
  • Definition 1:
  • (i) For a vector space V over F, an n-dimensional subspace U⊆V, and v∈V, we define the dimension of the affine space v+U to be n, and write:

  • dim*_F(v+U)=n.
  • (ii) For L≥N≥1, and b(x)=Σ_{0≤k<N} b_k·x^k, c(x)=Σ_{0≤k<L} c_k·x^k∈F[x], we denote b(x)≤c(x) if for all 0≤k<N it holds that c_k=b_k.
    Lemma 1. Take λ(x)∈F[x], where λ(0)=1. Let K be an extension field of F that contains all roots of λ(x). Represent λ(x) by: λ(x)=Π_{1≤j≤s}(1−x·α_j)^{r(j)} where α_1, . . . , α_s∈K* are mutually different and r(j)≥1. Then the following equality holds:

  • λ′(x)/λ(x)=Σ_{1≤j≤s, r(j) is odd} α_j/(1−x·α_j).
  • Proof. We can write λ(x)=β^2(x)·Π_{1≤j≤s, r(j) is odd}(1−x·α_j) where β(x)∈K[x]. In other words, every polynomial can be represented uniquely as a product of a square polynomial and a polynomial with roots of multiplicity 1. It then holds that

  • λ′(x)=β^2(x)·Σ_{1≤j≤s, r(j) is odd} α_j·Π_{1≤v≤s, r(v) is odd, v≠j}(1−x·α_v),

  • and hence:

  • λ′(x)/λ(x)=Σ_{1≤j≤s, r(j) is odd} α_j/(1−x·α_j).♦
  • Lemma 2. Take λ(x)∈F[x] with λ(0)=1, and b(x)=Σ_{0≤j≤N−1} b_j·x^j∈F[x]. Let K be an extension field of F that contains all roots of λ(x). Represent λ(x) by: λ(x)=Π_{1≤j≤s}(1−x·α_j)^{r(j)} where α_1, . . . , α_s∈K* are mutually different and r(j)≥1. Then

  • λ(x)·b(x)=λ′(x) (mod x^N)  (1)

  • iff

  • b_k=Σ_{1≤j≤s, r(j) is odd} α_j^{k+1} for all 0≤k≤N−1.  (2)
  • Note that here we do not assume anything on the degrees of λ(x) and b(x), not even s≤N. Thus it holds even when b(x)=0. Note also that when (2) holds then for 0≤k<(N−1)/2: b_k^2=b_{2k+1}.
  • Proof. Since λ(0)=1, λ(x)·b(x)=λ′(x) (mod x^N) is equivalent to b(x)=λ′(x)/λ(x) (mod x^N), which is equivalent to:

  • Σ_{0≤k<N} b_k·x^k=λ′(x)/λ(x) (by lemma 1)

  • =Σ_{1≤j≤s, r(j) is odd} α_j/(1−x·α_j) (mod x^N)

  • =Σ_{1≤j≤s, r(j) is odd} Σ_{0≤k} x^k·α_j^{k+1} (mod x^N)

  • =Σ_{0≤k≤N−1} x^k·Σ_{1≤j≤s, r(j) is odd} α_j^{k+1} (mod x^N),
  • and this is equivalent to b_k=Σ_{1≤j≤s, r(j) is odd} α_j^{k+1} for all 0≤k≤N−1. ♦
  • The following lemma enables us to skip the even iterations in the BCH Berlekamp Massey algorithm.
  • Lemma 3. Let λ(x)∈F[x], λ(0)=1. Suppose that N is odd, M=(N−1)/2, and that b(x)=Σ_{0≤k≤N} b_k·x^k satisfies b_M^2=b_N and

  • λ(x)·b(x)=λ′(x) (mod x^N).
  • It then holds that the coefficient of x^N in λ(x)·b(x) is zero and

  • λ(x)·b(x)=λ′(x) (mod x^{N+1}).
  • Proof. Let K be an extension field of F that contains all roots of λ(x). Represent λ(x) by: λ(x)=Π_{1≤j≤s}(1−x·α_j)^{r(j)} where α_1, . . . , α_s∈K* are mutually different and r(j)≥1. By lemma 2

  • b_k=Σ_{1≤j≤s, r(j) is odd} α_j^{k+1} for all 0≤k≤N−1.
  • In addition,

  • b_N=b_M^2=(Σ_{1≤j≤s, r(j) is odd} α_j^{M+1})^2=Σ_{1≤j≤s, r(j) is odd} α_j^{2M+2}=Σ_{1≤j≤s, r(j) is odd} α_j^{N+1}.
  • It follows that b_k=Σ_{1≤j≤s, r(j) is odd} α_j^{k+1} for all 0≤k≤N. Thus by the other direction of lemma 2: λ(x)·b(x)=λ′(x) (mod x^{N+1}). Since all the odd-degree coefficients of λ′(x) are zero, the coefficient of x^N in λ′(x) is zero, and hence the coefficient of x^N in λ(x)·b(x) is zero.♦
  • 1.2 Definitions
  • Definition 2. For N≥1, and b(x)=Σ_{0≤k<N} b_k·x^k∈F[x], b(x) is odd-square if for all 0≤k<(N−1)/2: b_k^2=b_{2k+1}.
    Definition 3. For τ, N, L≥1, and b(x)=Σ_{0≤k<L} b_k·x^k∈F[x], define

  • V_{N,τ,b(x)}={λ(x)∈F[x]: λ(x)·b(x)=λ′(x) (mod x^N), deg(λ(x))≤τ, λ(0)=1}

  • U_{N,τ,b(x)}={λ(x)∈F[x]: λ(x)·b(x)=λ′(x) (mod x^N), deg(λ(x))≤τ}

  • V_{N,τ,b(x),0}={λ(x)∈F[x]: λ(x)·b(x)=λ′(x) (mod x^N), deg(λ(x))≤τ, λ(0)=0}

  • U_{N,b(x)}={λ(x)∈F[x]: λ(x)·b(x)=λ′(x) (mod x^N)}
  • It is clear that either V_{N,τ,b(x)}=Ø or dim*(V_{N,τ,b(x)})=dim(U_{N,τ,b(x)})−1. By the above lemma, if V_{N,τ,b(x)}≠Ø for some τ≤N, then b(x) is odd-square. Note that if V_{N,τ,b(x)} is not empty and λ(x) is any element of V_{N,τ,b(x)}, then

  • λ(x)+V_{N,τ,b(x),0}=V_{N,τ,b(x)},
  • which implies that when V_{N,τ,b(x)}≠Ø,

  • dim*(V_{N,τ,b(x)})=dim(V_{N,τ,b(x),0}).
  • 1.3 The Dimension Bound 1 & 2
  • Lemma 4 (Dimension Bound 1). Let τ≥1 and L>N≥1, where N and L are even, and b(x)∈F[x] is odd-square, b(x)=Σ_{0≤k<L} b_k·x^k. Then, if V_{L,τ,b(x)}≠Ø,

  • dim*(V_{N,τ,b(x)})−dim*(V_{L,τ,b(x)})≤(L−N)/2.
  • Proof. For M≥1 set V_M≡V_{M,τ,b(x)}. It will be shown by induction on even s∈{0, 1, . . . , L−N} that

  • dim*(V_N)−dim*(V_{N+s})≤s/2.
  • The case s=0 is trivial. For the induction step, take even 0≤s<L−N, and M=N+s and λ(x)∈V_M, and observe that the coefficient of x^M in p(x)=λ(x)·b(x)−λ′(x) is

  • Σ_{0≤j≤τ} λ_j·b_{M−j}+λ_{M+1}.
  • Thus V_{M+1}={λ(x)∈V_M: λ_{M+1}+Σ_{0≤j≤τ} λ_j·b_{M−j}=0}, i.e. V_{M+1} is a (nonempty) affine space which is obtained from V_M by one additional linear homogeneous equation. It follows that dim*(V_M)≤dim*(V_{M+1})+1. Next, by the previous lemma, when λ(x)·b(x)=λ′(x) (mod x^{M+1}) then

  • λ(x)·b(x)=λ′(x) (mod x^{M+2}),
  • and hence V_{M+1}=V_{M+2}. This shows that dim*(V_{N+s})≤dim*(V_{N+s+2})+1.♦
    As a corollary we get:
    Lemma 5 (Dimension Bound 2). Take τ≥1, L=2τ, and L≥N≥1 where N is even, and b(x)=Σ_{0≤k<L} b_k·x^k∈F[x] is odd-square. If there exists a separable σ(x)∈V_{L,τ,b(x)} such that deg(σ(x))=τ, then:

  • dim*(V_{N,τ,b(x)})≤(L−N)/2.
  • Proof. This lemma follows from the previous lemma and from the claim that

  • (*) V≡V_{L,τ,b(x)}={σ(x)}, i.e. dim*(V_{L,τ,b(x)})=0.
  • To prove (*) take any λ(x)∈V and let K be an extension field of F that contains all the roots of σ(x) and λ(x). We can then represent

  • λ(x)=Π_{1≤j≤s}(1−x·α_j)^{r(j)}
  • where s≤τ and α_1, . . . , α_s∈K* are mutually different, r(j)≥1, and r(1)+r(2)+ . . . +r(s)≤τ, and,

  • σ(x)=Π_{1≤j≤τ}(1−x·β_j)
  • where β_1, . . . , β_τ∈K* are mutually different. Define A to be the symmetric difference of {β_1, . . . , β_τ} and {α_j: j∈[s], r(j) is odd} [the symmetric difference of two sets is the set of elements which are in one of the sets but not in their intersection]. By lemma 2:

  • Σ_{1≤j≤τ} β_j^{k+1}=b_k=Σ_{1≤j≤s, r(j) is odd} α_j^{k+1} for all 0≤k≤L−1.
  • That is:

  • 0=Σ_{1≤j≤τ} β_j^{k+1}+Σ_{1≤j≤s, r(j) is odd} α_j^{k+1}=Σ_{α∈A} α^{k+1} for all 0≤k≤L−1.
  • Note that |A|≤s+τ≤2τ; thus if A≠Ø we get a contradiction, since this yields a linear dependency of the columns of a (2τ)×|A| Vandermonde matrix. Therefore A=Ø and hence λ(x)=σ(x).♦
  • 1.4 Uniqueness Lemma 1 (UL1)
  • Note that the following lemma uses the fact that F has characteristic 2.
  • Lemma 6:
  • I. For every λ(x)∈F[x] such that λ(0)=1, there exist unique polynomials λ_1(x), u(x)∈F[x] such that:

  • λ_1(x)·u^2(x)=λ(x) and λ_1(0)=u(0)=1 and λ_1(x) is separable.
  • II. Suppose that λ(x), b(x)∈F[x] satisfy:

  • λ(x)·b(x)=λ′(x) (mod x^N) with λ(0)=1,

  • and let λ_1(x), u(x)∈F[x] be the unique polynomials such that:

  • λ_1(x)·u^2(x)=λ(x) and λ_1(0)=u(0)=1 and λ_1(x) is separable,

  • then

  • λ_1(x)·b(x)=λ_1′(x) (mod x^N) and λ_1(0)=1.
  • III. Take τ, N≥1, and b(x)∈F[x], and suppose that there is a unique λ(x)∈F[x] such that:

  • λ(x)·b(x)=λ′(x) (mod x^N) and λ(0)=1 and deg(λ(x))≤τ.

  • Then λ(x) is separable.
  • Proof.
  • I. There exist unique λ_1(x), u(x)∈K[x], in some extension field K, such that:

  • λ_1(x)·u^2(x)=λ(x) and λ_1(0)=u(0)=1.
  • Since u^2(x)=gcd(λ(x), λ′(x)) and the gcd is computed by the Euclidean algorithm, u^2(x)∈F[x], and hence λ_1(x) and u(x) must be in F[x] (and not only in the extension ring K[x]).
    II. It follows from the assumptions of II that:

  • λ_1(x)·u^2(x)·b(x)=(u^2(x)·λ_1(x))′ (mod x^N)=u^2(x)·λ_1′(x) (mod x^N).
  • Dividing both sides by u^2(x), we get that:

  • λ_1(x)·b(x)=λ_1′(x) (mod x^N).
  • III. Let λ_1(x), u(x)∈F[x] be the unique polynomials such that:

  • λ_1(x)·u^2(x)=λ(x) and λ_1(0)=u(0)=1 and λ_1(x) is separable.
  • Then by II

  • λ_1(x)·b(x)=λ_1′(x) (mod x^N) and λ_1(0)=1, and clearly: deg(λ_1(x))≤τ,
  • and hence by the uniqueness u(x)=1 and thus λ_1(x)=λ(x). It follows that λ(x) is separable.♦
  • 1.5 A Fundamental Rule of Nonhomogeneous Linear Equations
  • For completeness sake the following known fact is presented.
  • Fact. Let A be an M×(N+1) matrix over a field K (a general field, of any characteristic), and B the (M+1)×(N+1) matrix over K obtained from A by adding one additional row, called v, at the bottom of A. Set R≡{x=[x_1, . . . , x_N, x_{N+1}]^T∈K^{N+1}: x_{N+1}=1}. If it holds that

  • Ø≠V≡{x∈R: A·x=0}={x∈R: B·x=0}≡V′,
  • then v is in the row space of A.
    Proof. Let

  • U={x=[x_1, . . . , x_N, x_{N+1}]^T∈K^{N+1}: x_{N+1}=0, A·x=0} (the set of solutions to the homogeneous equations),

  • U′={x=[x_1, . . . , x_N, x_{N+1}]^T∈K^{N+1}: x_{N+1}=0, B·x=0},
  • and for a matrix C let C* denote the matrix obtained from C by omission of the last column (including the case where C comprises one row).
  • Since Ø≠V′=V then U′=U. It follows that v*=u·A* for some u, a row vector in KM. Put w=v−u·A, then

  • w=[0, . . . ,0,ξ] for some ξ∈K,
  • and w is in the row space of B, and hence for all x∈V′: w·x=0, thus w=0, which implies that v is in the row space of A. ♦
  • 1.6 The Dimension Equality
  • Lemma 7 (The Dimension Equality). Take τ≥1, L=2τ, and L≥N≥1 where N is even, and b(x)=Σ_{0≤k<L} b_k·x^k∈F[x] is odd-square. If there exists a separable σ(x)∈V_{L,τ,b(x)} such that deg(σ(x))=τ, then:

  • dim*(V_{N,τ,b(x)})=(L−N)/2.
  • Proof. For i≥1 write V_i≡V_{i,τ,b(x)}. Recall that by lemma 5, dim*(V_N)≤(L−N)/2. For N∈[L] and λ(x)=Σ_{0≤j≤τ} λ_j·x^j∈F[x] such that λ_0=1, it holds that λ(x)∈V_N iff

  • λ(x)·b(x)=λ′(x) (mod x^N),  (1)
  • This is equivalent to:

  • the i-th linear equation L_i≡Σ_{0≤j≤i} λ_j·b_{i−j}+(i+1)·λ_{i+1}=0 for all 0≤i≤N−1 (we define λ_j=0 for j>τ).  (2)
  • Note that the i-th linear equation is independent of N. By lemma 3 above, when N∈[L−1] is odd, then

  • λ(x)·b(x)=λ′(x) (mod x^N) implies λ(x)·b(x)=λ′(x) (mod x^{N+1}).
  • Thus, by the fact above, the formal linear equation L_N is linearly dependent on the formal linear equations L_1, . . . , L_{N−1} (seen as vectors of coefficients in F^{τ+1}) over F. It follows that (1) is equivalent to:

  • L_i≡Σ_{0≤j≤i} λ_j·b_{i−j}+λ_{i+1}=0 for all even i∈{0, . . . , N−1} (for even i the factor (i+1) reduces to 1 in characteristic 2).  (3)
  • By lemma 5 above V_L={σ(x)}, i.e. dim*(V_L)=0. Thus, putting N=L in (3), we get that {L_i: i∈{0, 2, 4, . . . , L−2}} is an independent set of τ formal linear equations in τ unknowns. Thus for even N∈[L] we get that V_N is the set of solutions of {L_i: i∈{0, 2, . . . , N−2}}. Hence the number of independent linear equations is reduced by (L−N)/2 and therefore dim*(V_N)=(L−N)/2.♦
  • Comment. This proof is also an alternative proof to the uniqueness lemma 2 below.♦
  • 1.7 Example Related to the Dimension Equality
  • We had

  • L_i≡Σ_{0≤j≤i} λ_j·b_{i−j}+(i+1)·λ_{i+1}=0 for all 0≤i≤N−1 (we define λ_j=0 for j>τ).

  • Therefore

  • L_0≡λ_0·b_0+λ_1=b_0+λ_1=0,

  • L_1≡λ_0·b_1+λ_1·b_0=b_1+λ_1·b_0=0.
  • Note that b_1+λ_1·b_0=b_0^2+λ_1·b_0=b_0·(b_0+λ_1); thus L_1 depends linearly on L_0.
  • 1.8 Applying the Dimension Equality to the Syndrome Polynomial of BCH
  • Let t≥r≥1, d=2t+1, n*>k*≥1 where n*−k*=d−1, and consider an [n*, k*] BCH code, and suppose a transmitted codeword has τ=t+r errors that are located at E_0={α_1, . . . , α_τ}⊆F*. Set E′={1/β: β∈E_0}. Define for 0≤k≤2τ−1 the syndromes:

  • S_k=Σ_{1≤j≤t+r} α_j^{k+1} for all 0≤k≤2τ−1.
  • The decoder knows the syndromes {S_k}_{0≤k≤d−2}. Define the syndrome polynomial:

  • S(x)=Σ_{0≤k≤2τ−1} S_k·x^k,
  • and define the ELP:

  • λ*(x)=Π_{1≤j≤τ}(1−x·α_j)∈F[x].
  • By lemma 2:

  • λ*(x)·S(x)=λ*′(x) (mod x^{2τ}).
  • Thus by lemma 7 the affine space V_{2τ,τ,S(x)} has dimension 0 and,
  • (*1) the affine space V=V_{2t,τ,S(x)} has dimension r.
  • In the following section, this (low) dimension of V plays a role in enabling low complexity. Note that

  • V={λ(x)∈F[x]: λ(x)·S(x)=λ′(x) (mod x^{2t}), λ(0)=1, deg(λ(x))≤τ}.
  • The decoder “knows” this space and can find a basis to it.
  • 2. Analysis of the BCH Key Equations II 2.1 Polynomial Divisions for Key Equations Solutions
  • The recurrence order of (λ(x), σ(x))∈F[x]^2, denoted by ord(λ, σ), is defined as

  • ord(λ, σ)=max{deg λ, 1+deg σ}.
  • Lemma 8.
  • I. Take even N≥1 and λ(x), γ(x), b(x)∈F[x], b(x)=Σ_{0≤k≤N−1} b_k·x^k, and suppose:

  • λ(0)=1  (1)

  • λ(x)·b(x)=γ(x) (mod x^N).  (2)

  • ord(λ, γ)≤N/2,  (3)
  • and (λ(x), γ(x)) is the pair with minimal order for which (1)-(3) hold. It then holds that gcd(λ(x), γ(x))=1. Now take σ(x), ω(x)∈F[x] such that the same holds:

  • σ(0)=1  (1)

  • σ(x)·b(x)=ω(x) (mod x^N).  (2)

  • ord(σ, ω)≤N/2.  (3)
  • There then exists c(x)∈F[x] such that c(0)=1, deg(c(x))>1 and σ(x)=λ(x)·c(x) and ω(x)=γ(x)·c(x).
    II. If we add the assumption that:

  • λ′(x)=γ(x) and σ′(x)=ω(x),  (4)

  • it then holds that there exists u(x)∈F[x] such that u(0)=1 and c(x)=u(x)^2. [II. follows also from I. and lemma 10 below.]
    III. It follows that the other direction of I is also true: if λ(x), γ(x)∈F[x] satisfy (1)-(3) and gcd(λ(x), γ(x))=1, then (λ(x), γ(x)) is the pair with minimal order for which (1)-(3) hold.
  • Proof.
  • I. If there were g(x)∈F[x] such that g(x)|λ(x) and g(x)|γ(x) and deg(g(x))>0, then g(0)≠0, and hence we would have g(0)·(λ(x)/g(x))·b(x)=g(0)·(γ(x)/g(x)) (mod x^N), a contradiction to the minimality of λ(x). Thus gcd(λ(x), γ(x))=1.
    Next, it holds that b(x)=γ(x)/λ(x) (mod x^N) and b(x)=ω(x)/σ(x) (mod x^N). Therefore:

  • γ(x)/λ(x)=ω(x)/σ(x) (mod x^N),
  • implying:

  • γ(x)·σ(x)=ω(x)·λ(x) (mod x^N),
  • and therefore by (3):

  • γ(x)·σ(x)=ω(x)·λ(x).
  • Since gcd(λ(x), γ(x))=1 it follows that λ(x)|σ(x). Let c(x)=σ(x)/λ(x); it then holds that c(0)=1 and:

  • γ(x)·λ(x)·c(x)=ω(x)·λ(x), that is: γ(x)·c(x)=ω(x). ♦
  • II. Here we assume λ′(x)=γ(x) and σ′(x)=ω(x). Since σ(x)=λ(x)·c(x), then σ′(x)=λ′(x)·c(x)+λ(x)·c′(x), thus ω(x)=γ(x)·c(x)+λ(x)·c′(x), implying that

  • λ(x)·c′(x)=0, that is c′(x)=0.
  • Claim: for p(x)∈F[x], if p′(x)=0 then p(x)=q(x)2 for some q(x)∈F[x].
  • Proof: put

  • p(x)=Σ_{0≤i≤n} a_i·x^i; then p′(x)=Σ_{1≤i≤n, i odd} a_i·x^{i−1}.
  • It follows from p′(x)=0 that:

  • p(x)=Σ_{0≤i≤n, i even} a_i·x^i,

  • thus:

  • p(x)=(Σ_{0≤i≤n, i even} a_i^{1/2}·x^{i/2})^2.♦
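The claim above is constructive; a sketch over GF(2)[x] with bitmask polynomials (in GF(2) every coefficient is its own square root, so bit 2i simply moves to bit i):

```python
def poly_mul(a, b):
    """Carryless (GF(2)[x]) product of bitmask polynomials."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def poly_sqrt(p):
    """Square root of p assuming p' = 0, i.e. only even-degree terms."""
    q, i = 0, 0
    while p >> (2 * i):
        if (p >> (2 * i)) & 1:
            q |= 1 << i
        i += 1
    return q
```

For example, x^4+x^2+1 has zero derivative and square root x^2+x+1.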
  • 2.2 Polynomial Divisions for Key Equations Solutions—BCH Generalization
  • Lemma 9. Take N≥1 σ(x), λ(x)∈F[x], σ(0)=λ(0)=1 and b(x)=Σ0≤k≤N-1 bkxk∈F[x]\{0} and suppose:

  • λ(x)·b(x)=λ′(x)(mod x N) and σ(x)·b(x)=σ′(x)(mod x N)  (1)

  • N≥deg(λ(x))+deg(σ(x))  (2)

  • σ(x)|λ(x)  (3)
  • Then there exists ω(x)∈F[x], such that ω(0)=1 and λ(x)=ω(x)2·σ(x).
    Proof. Let K be an extension field of F that contains all λ(x) roots and all σ(x) roots. Represent λ(x) and σ(x) by:

  • λ(x)=Π1≤j≤s(1−x·α j)r(j) and σ(x)=Π1≤j≤s′(1−x·α′ j)r′(j),  (4)
  • where α1, . . . , αs∈K* are mutually different and r(j)≥1. Likewise α′1, . . . , α′s′∈K* are mutually different and r′(j)≥1. Define A to be the symmetric difference of {αj:1≤j≤s, r(j) is odd} and {α′j:1≤j≤s′, r′(j) is odd}. It follows from lemma 2 that for 0≤k≤N−1:

  • Σ1≤j≤s,r(j) is odd αj k+1 =b k1≤j≤s′,r′(j) is odd α′j k+1.
  • That is,

  • 0=Σ1≤j≤s,r(j) is odd αj k+11≤j≤s′,r′(j) is odd α′j k+1β∈Aβk+1.
  • If A≠Ø we get a contradiction since this yields linear dependency of the columns of an N×|A| Vandermonde matrix where |A|≤s+s′≤N. Thus A=Ø and hence s=s′ and:

  • {αj:1≤j≤s, r(j) is odd}={α′j:1≤j≤s′, r′(j) is odd}.

  • Define

  • f(x)=Π1≤j≤s,r(j) is odd(1−x·α j).
  • By the above, there are polynomials g(x) and h(x) in F[x] such that g(0)=h(0)=1 and:

  • λ(x)=(g(x))2 ·f(x) and σ(x)=(h(x))2 ·f(x).  (5)
  • Since σ(x)|λ(x) then h(x)|g(x). Define ω(x)=g(x)/h(x) then ω(0)=1 and ω(x)2·σ(x)=λ(x).♦
  • 2.3 Continuation Principle for Reed-Solomon (RS)
  • Lemma 10. Take N≥1, λ(x), γ(x), b(x)∈F[x], λ(0)=1, b(x)=Σ0≤k≤N-1bkxk, λ(x)=Σ0≤k≤τ λkxk and suppose:

  • λ(xb(x)=γ(x)(mod x N).  (1)

  • deg(γ(x))<τ<N.  (2)
  • It then holds for every L>N that there exists unique {bk:N<k≤L}⊆F such that for

  • B(x)=Σ0≤k≤L-1 b k x k:  (3)

  • λ(xB(x)=γ(x)(mod x L).  (4)
  • Proof. For k=N, . . . ,L−1 define, inductively, in increasing order:

  • b k1≤j≤τλj ·b k−j.  (5)
  • Since λ0=1 it is equivalent to

  • 0=Σ0≤j≤τλj ·b k−j.  (6)
  • This with (1) is equivalent to (4). The uniqueness follows by induction since (6) implies (5).♦
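Recursion (5) of lemma 10 translates directly into code. The sketch below works over F=GF(2) with coefficient lists (an illustrative choice; the function name is an assumption): starting from b(x)=1+x+x2, which satisfies (1+x)·b(x)≡1 (mod x3), the unique continuation keeps every coefficient equal to 1.

```python
def extend_b(lam, b, L):
    """Extend b under λ(x)·B(x) ≡ γ(x) (mod x^L), with λ(0)=1, via the
    recursion b_k = Σ_{1≤j≤τ} λ_j·b_{k-j} of lemma 10 (arithmetic mod 2).
    lam and b are GF(2) coefficient lists, lowest degree first."""
    b = list(b)
    tau = len(lam) - 1
    for k in range(len(b), L):
        s = 0
        for j in range(1, tau + 1):
            if k - j >= 0 and lam[j] and b[k - j]:
                s ^= 1   # addition in GF(2) is XOR
        b.append(s)
    return b

# λ(x) = 1 + x, b(x) = 1 + x + x^2: (1+x)·b(x) = 1 + x^3 ≡ 1 (mod x^3)
assert extend_b([1, 1], [1, 1, 1], 6) == [1, 1, 1, 1, 1, 1]
```

The uniqueness in the lemma is visible in the code: each new b_k is forced by the earlier coefficients.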
  • 2.4 Continuation Principle for BCH
  • Lemma 11. Take L>N≥1 λ(x)∈F[x], λ(0)=1 and b(x)=Σ0≤k≤N-1bkxk∈F[x] and suppose that:

  • λ(xb(x)=λ′(x)(mod x N) and deg(λ(x))<N.  (1)
  • There then exists {bk:N≤k<L}⊆F such that

  • for odd 0<k<L it holds that b k=(b (k-1)/2)2,  (2)
  • and for B(x)=Σ0≤k≤L-1bkxk:

  • λ(xB(x)=λ′(x)(mod x L).  (3)
  • Note that by lemma 10 these {bk:N≤k<L} are unique.
    Proof. Let K be an extension field of F that contains all λ(x) roots. Represent λ(x) by: λ(x)=Π1≤j≤s(1−x·αj)r(j) where α1, . . . , αs∈K are mutually different and r(j)≥1. By lemma 2 it follows from λ(x)·b(x)=λ′(x)(mod xN) that:

  • b k1≤j≤s,r(j) is odd αj k+1 for all 0≤k≤N−1.
  • Define now:

  • b k1≤j≤s,r(j) is odd αj k+1 for all N≤k≤L−1.
  • Then (2) follows, and by the other direction of lemma 2, (3) holds for B(x)=Σ0≤k≤L-1bkxk.♦
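Pattern (2) is the Frobenius identity: since b k=Σ αj k+1 and squaring is additive in characteristic 2, the coefficient at an odd index is the square of the coefficient at half that index. A numeric check over GF(16)=GF(2)[y]/(y4+y+1); the field, its reduction polynomial and the three chosen αj are illustrative assumptions:

```python
def gf_mul(a, b):
    """Multiply in GF(16) = GF(2)[y]/(y^4 + y + 1); elements are 4-bit ints."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b10000:       # reduce when degree reaches 4
            a ^= 0b10011
    return r

def gf_pow(a, n):
    r = 1
    for _ in range(n):
        r = gf_mul(r, a)
    return r

roots = [2, 3, 7]             # hypothetical α_j of odd multiplicity
b = []
for k in range(12):
    s = 0
    for a in roots:
        s ^= gf_pow(a, k + 1)  # b_k = Σ_j α_j^(k+1), as in lemma 2
    b.append(s)

# the odd-square pattern (2): b at odd index 2k+1 equals (b_k)^2 (Frobenius)
for k in range(6):
    assert b[2 * k + 1] == gf_mul(b[k], b[k])
```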
  • 2.5 BCH Probability Bound for Key Equations Solutions 1 (PB1)
  • Lemma 12. Take t>s≥1, and randomly sample an odd-square b(x)=Σ0≤k<2t bkxk∈F[x], with uniform distribution.
    I. The probability that there exists separable λ(x)∈F[x] such that:

  • λ(x)·b(x)=λ′(x)(mod x 2t) and λ(0)=1 and deg(λ(x))=t−s, is upper bounded by q −s.  (1)
  • II. The probability that there exists any polynomial λ(x)∈F[x] such that (1) holds is upper bounded by q−s/(1−1/q2).
  • Proof.
  • I. Recall that the set of odd-square polynomials of degree<2t, is:

  • V={b(x)=Σ0≤k<2t b k x k ∈F[x]: for all 0≤k<t−1:b k 2 =b 2k+1}.
  • Define now:
  • W={λ(x)∈F[x]:λ(x) is separable, λ(0)=1, and deg(λ(x))=t−s}.
  • Note that when b(x)∈V and λ(x)∈W satisfies

  • λ(x)·b(x)=λ′(x)(mod x 2t),
  • it also satisfies:

  • λ(x)·b(x)=λ′(x)(mod x 2t-2s) and λ(0)=1 and deg(λ(x))=t−s.  (2)
  • For λ(x)∈W and 1≤j≤t, define:

  • U λ(x),j ={b(x)∈V:λ(x)·b(x)=λ′(x)(mod x 2j)}.
  • By lemma 11 and its proof, Uλ(x),t contains exactly one polynomial and by (2) this polynomial is also in Uλ(x),t-s. On the other hand, it is clear from the definition and from lemma 10 and its proof that, for b(x)=Σ0≤k<2t bkxk∈Uλ(x),t-s it holds that A={bk:0≤k<2(t−s)} are uniquely determined by the key equations and B={bk:2(t−s)≤k<2t, k is even} can be chosen freely from F and C={bk:2(t−s)≤k<2t, k is odd} are uniquely determined by A and B through the equation bk 2=b2k+1 (for all 0≤k<t−1). It follows that:

  • |U λ(x),t-s |=q s.
  • Next note that by lemma 11 and its proof for λ1(x) and λ2(x)∈W such that λ1(x)≠λ2(x) it holds that

  • U λ1(x),t-s ∩U λ2(x),t-s=Ø.
  • Now, randomly sample b(x) from V with uniform distribution and let R be the event that b(x) is in:

  • U≡∪ λ(x)∈W U λ(x),t-s.
  • Then, given R, b(x) is a (random) element of Uλ(x),t-s for some λ(x)∈W. Hence by the above the conditional probability that b(x) is in Uλ(x),t is exactly q−s. It follows that the probability that there exists separable λ(x)∈F[x] such that (1) holds is:

  • Pr(Rq −s,
  • which proves I.
    II. It follows from UL1 above (see section 1.4) that if λ(x)∈F[x] satisfies (1) above, then there are unique polynomials λ1(x), u(x)∈F[x], such that:

  • λ1(xu 2(x)=λ(x) and λ1(0)=u(0)=1 and λ1(x) is separable,  (a1)

  • and

  • λ1(x)·b(x)=λ1′(x)(mod x 2t).  (a2)
  • Note that u(x) can also be 1. Let j=deg(u(x)); then deg(λ1(x))=t−s−2j. It was proved above that the probability that, when we sample b(x) randomly from V, (a2) will be satisfied is upper bounded by q−s-2j. Thus the probability that (1) is satisfied is upper bounded by:

  • q −s·(1+q −2 +q −4+ . . . )=q −s/(1−1/q 2)♦
  • 2.6 General Polynomial Division Principles Related to RS and BCH
  • Interpolation. For γ1, . . . , γN, distinct elements of F*, and for every p(x)∈F[x] with deg(p(x))<N there exist unique coefficients a1, . . . , aN∈F such that

  • p(x)=Σj∈[N] a j·Πi∈[N]\{j}(1−x·γ i).
  • Proof. For j∈[N] define pj(x)=Πi∈[N]\{j}(1−x·γ i). It is sufficient to prove that {pj(x)}j∈[N] are linearly independent. Take a1, . . . , aN∈F and define

  • p(x)=Σj∈[N] a j ·p j(x),
  • it then holds for j∈[N] that

  • p(1/γ j)=a j·Πi∈[N]\{j}(1−γ i/γ j), where the product is nonzero since the γ i are distinct.
  • Thus if p(x)=0 then aj=0 for all j∈[N].♦
    Lemma 13. Take N≥1, and any polynomials λ(x),σ(x)∈F[x] (of any degrees) such that λ(0)=1. There then exists a unique polynomial

  • b(x)=Σ0≤k<N b k x k ∈F[x] such that

  • λ(xb(x)=σ(x)(mod x N).  (1)
  • Proof. We represent λ(x)=1+x·λ1(x), where λ1(x)∈F[x]. (1) implies that:

  • b(x)=σ(x)/(1+x·λ 1(x))(mod x N)=σ(x)·(Σ0≤i≤N-1(x·λ 1(x))i)(mod x N).♦
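The quotient of lemma 13 can also be computed without expanding the geometric series, by cancelling the lowest remaining coefficient one power at a time (valid because λ(0)=1, so λ is invertible mod x^N). A sketch over GF(2) with bitmask polynomials; the representation and function name are illustrative assumptions:

```python
def div_mod_xN(sigma, lam, N):
    """Return the unique b with λ(x)·b(x) ≡ σ(x) (mod x^N), for λ(0) = 1.
    Polynomials over GF(2) as int bitmasks (bit i = coeff of x^i)."""
    b = 0
    r = sigma & ((1 << N) - 1)   # work mod x^N throughout
    for k in range(N):
        if (r >> k) & 1:         # lowest uncancelled term is x^k
            b ^= 1 << k
            r ^= lam << k        # subtract x^k·λ(x); subtraction is XOR in char 2
    return b

# 1/(1+x) ≡ 1 + x + x^2 + x^3 (mod x^4)
assert div_mod_xN(0b1, 0b11, 4) == 0b1111
```

Uniqueness (the lemma's statement) is reflected in the fact that each step of the loop is forced.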
  • Lemma 14. Take any M,N≥1, and λ(x),σ(x)∈F[x] such that λ(x) is separable and λ(0)=1 and M=deg(λ(x))>deg(σ(x)) and let

  • b(x)=Σ0≤k<N b k x k ∈F[x]
  • be the unique polynomial (see lemma 13) such that:

  • λ(xb(x)=σ(x)(mod x N).  (1)
  • Let K be an extension field of F that contains all λ(x) roots; we can uniquely represent λ(x) as:

  • λ(x)=Π1≤j≤M(1−x·α j),
  • where α1, . . . , αM∈K* are distinct scalars.
    There exists a1, . . . , aM∈F such that

  • b k=Σ1≤j≤M a j·αj k for all 0≤k<N.  (2)
  • a1, . . . , aM are unique when M≤N/2.
    Proof. By the claim above there exists unique a1, . . . , aM∈F such that

  • σ(x)=Σj∈[M] a j·Πi∈[M]\{j}(1−x·α i).
  • It follows from (1) that:

  • b(x)=Σj∈[M] a j/(1−αj ·x)(mod x N)

  • j∈[M] a j·Σ0≤i≤N-1j ·x)i

  • 0≤i≤N-1Σj∈[M] a j·(αj ·x)i

  • 0≤i≤N-1 x i·Σj∈[M] a j·αj i.
  • This proves (2). The uniqueness, when M≤N/2, follows from the same Vandermonde independency argument as for the BCH. ♦
  • 3. Analysis of the Key Equations III
  • 3.1 The Uniqueness and Expansion Lemmas
  • For N, τ≥1 and b(x)∈F[x] we defined:

  • V N,τ,b(x)≡{λ(x)∈F[x]:λ(xb(x)=λ′(x)(mod x N),deg(λ(x))≤τ,λ(0)=1}.
  • Note that for all λ(x)∈VN,τ,b(x) the roots of λ(x) are nonzero. The following lemma eliminates certain singularities in our solution. It implies that if the ELP is in V, then any polynomial in V that has r roots in W in common with the ELP is in fact the ELP.
  • Lemma 15 (Uniqueness Lemma 2 (UL2)). Let t≥1, r≥1 and b(x)∈F[x] odd-square, b(x)=Σ0≤k<Lbkxk, and suppose that λ(x),σ(x)∈V2t,t+r,b(x) wherein λ(x) is separable. Suppose also that for some D⊆F*, |D|=r, it holds for every β∈D that λ(β−1)=σ(β−1)=0. It then holds that σ(x)=λ(x).
    Proof. Let K be an extension field of F that contains all λ(x) roots and all σ(x) roots. We can represent λ(x) and σ(x) by:

  • λ(x)=Π1≤j≤t+r(1−x·α j)

  • σ(x)=Π1≤j≤t′+r(1−x·β j)r(j)
  • Where 0≤t′≤t, r(j)≥1 and α1, . . . , αt+r∈K* are mutually different and β1, . . . , βt′+r∈K* are mutually different. Note that D⊆{α1, . . . , αt+r} and D⊆{β1, . . . , βt′+r}. Thus we can assume without loss of generality that αi=βi∈D for i∈[r]. Let B={i∈[r]:r(i) is even} and b=|B|. Note that t′≤t−b.
  • By lemma 2 for all 0≤k≤2t−1:

  • Σ1≤j≤t+rαj k+1 =b k1≤j≤t′+r,r(j) is odd βj k+1.
  • Thus for every 0≤k≤2t−1:

  • Σ1≤j≤t+rαj k+11≤j≤t′+r,r(j) is odd βj k+1=0,
  • that is,

  • Σ1≤j≤r,r(j) is even αj k+1r+1≤j≤t+rαj k+1r+1≤j≤t′+r,r(j) is odd βj k+1=0.
  • Let A1={αj: j∈B}, A2={αj: r+1≤j≤t+r}, A3={βj: r+1≤j≤t′+r, r(j) is odd}. It then holds that |A1|=b and |A2|=t and |A3|=t′≤t−b.
  • Thus

  • |A 1 |+|A 2 |+|A 3 |≤b+t+(t−b)≤2t.
  • Note that

  • A 1 ∩A 2 =A 1 ∩A 3=Ø,
  • and define

  • C=(A 1 ∪A 2 ∪A 3)\(A 2 ∩A 3).
  • Then |C|≤2t and by the above for every 0≤k≤2t−1:

  • Σγ∈Cγk+1=0.
  • If C is not the empty set we get a contradiction since this yields linear dependency of the columns of a (2t)×|C| Vandermonde matrix where |C|≤2t. Thus C=Ø and hence A1=Ø and A2∪A3=A2∩A3, that is A2=A3. It follows that λ(x)=σ(x).♦
  • Recall that the transformation x→x2 is a 1-1 linear transformation from F to F over F2.
  • Lemma 16 (Expansion Lemma). Let t≥1, r≥s≥1 and b(x)∈F[x] odd-square, b(x)=Σ0≤k<Lbkxk, and take λ(x)∈V2t,t+r,b(x) with deg(λ(x))=t+s. It then holds for every p(x)∈F[x] such that p(0)=1, deg(p(x))≤(r−s)/2 and f(x)=p2(x) that f(x)·λ(x)∈V2t,t+r,b(x).
    Proof. Note that f′(x)=0 and hence for all g(x)∈F[x] (f(x)·g(x))′=f(x)·g′(x), thus since

  • λ(x)·b(x)=λ′(x)(mod x 2t)
  • Then

  • f(x)·λ(x)·b(x)=f(x)·λ′(x)(mod x 2t)=(f(x)·λ(x))′(mod x 2t).
  • In addition deg(f(x)·λ(x))≤t+r, and (f·λ)(0)=1. Thus f(x)·λ(x)∈V2t,t+r,b(x).♦
  • 3.2 The Dimension Bound 3 (DB3)
  • Lemma 17. Let N, τ≥1, b(x)=Σ0≤k<Nbkxk∈F[x] odd-square. Then, if VN,τ,b(x)≠Ø:

  • Δ≡dim*(V N,τ+1,b(x))−dim*(V N,τ,b(x))≤1.
  • Proof. Note that the case τ≥N−1 is trivial: if we add to any basis of VN,τ,b(x) the polynomial λ(x)=xτ+1 we get a basis of VN,τ+1,b(x), and hence in this case Δ=1. Assume henceforth that τ<N−1. A polynomial λ(x)=Σ0≤i≤τλixi∈F[x] is in VN,τ,b(x) iff λ0=1 and

  • Σ0≤i≤kλi ·b k−i+(k+1)·λk+1=0 for all 0≤k<N (we define λi=0 for i>τ).
  • Likewise a polynomial λ(x)=Σ0≤i≤τ+1λixi∈F[x] is in VN,τ+1,b(x) iff λ0=1 and

  • Σ0≤i≤kλi ·b k−i+(k+1)·λk+1=0 for all 0≤k<N.
  • Let δi,k be the GF(2) Kronecker delta, i.e., for integers i,k: δi,k=1GF(2) if i=k and δi,k=0GF(2) if i≠k. Consider the following N row vectors in FN+1:

  • v 0 =[b 0,1,0, . . . ,0]

  • v 1 =[b 1 ,b 0,0, . . . ,0]

  • v 2 =[b 2 ,b 1 ,b 0,1,0, . . . ,0]

  • v 3 =[b 3 ,b 2 ,b 1 ,b 0,0, . . . ,0]

  • v 4 =[b 4 ,b 3 ,b 2 ,b 1 ,b 0,1,0, . . . ,0]

  • v 5 =[b 5 ,b 4 ,b 3 ,b 2 ,b 1 ,b 0,0, . . . ,0]

  • v 6 =[b 6 ,b 5 ,b 4 ,b 3 ,b 2 ,b 1 ,b 0,1,0, . . . ,0]

  • v N−1 =[b N−1 ,b N−2 ,b N−3 , . . . ,b 2 ,b 1 ,b 0],
  • and let A be the N×N matrix whose rows are v0, . . . , vN−1 respectively. It then holds that a polynomial λ(x)=1+Σ1≤i≤τλixi∈F[x] is in VN,τ,b(x) iff

  • A·[1,λ1, . . . ,λτ,0, . . . ,0]=0,
  • and a polynomial

  • λ(x)=1+Σ1≤i≤τ+1λi x i ∈F[x] is in V N,τ+1,b(x) iff

  • A·[1,λ1, . . . ,λτ,λτ+1,0, . . . ,0]=0.
  • It follows that dim*(VN,τ+1,b(x))−dim*(VN,τ,b(x))≤1♦
    As a corollary we get:
  • Lemma 18 (Dimension Bound 3)
  • Let τ≥1, s≥1, b(x)∈F[x] odd-square, b(x)=Σ0≤k<Nbkxk. Then, if VN,τ,b(x)≠Ø:

  • dim*(V N,τ+s,b(x))−dim*(V N,τ,b(x))≤s.
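Dimension Bound 3 can be observed by brute force in a toy case over F=GF(2) (the patent's field is GF(2^m); the bitmask representation and helper names are illustrative assumptions). With the odd-square b(x)=1+x+x2+x3, the size of V4,τ,b, which is 2^dim* when the space is nonempty, at most doubles as τ grows by one:

```python
def pmul(a, b):
    """Multiply GF(2)[x] polynomials stored as int bitmasks."""
    r, i = 0, 0
    while b >> i:
        if (b >> i) & 1:
            r ^= a << i
        i += 1
    return r

def deriv(p):
    """Formal derivative in characteristic 2."""
    d, i = 0, 1
    while p >> i:
        if (p >> i) & 1 and i % 2 == 1:
            d ^= 1 << (i - 1)
        i += 1
    return d

def V(N, tau, b):
    """Brute-force V_{N,tau,b}: all λ with λ(0)=1, deg(λ) ≤ tau and
    λ·b ≡ λ' (mod x^N)."""
    mask = (1 << N) - 1
    return [lam for lam in range(1, 1 << (tau + 1), 2)   # step 2 forces λ(0)=1
            if pmul(lam, b) & mask == deriv(lam) & mask]

b = 0b1111                        # 1 + x + x^2 + x^3, an odd-square polynomial
sizes = [len(V(4, tau, b)) for tau in range(1, 4)]
assert sizes == [1, 1, 2]         # |V| at most doubles per unit of tau
```

The two solutions found for τ=3 are (1+x) and (1+x)^3, in line with the expansion lemma (lemma 16): the second is the first times a square.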
  • 3.3 Dimension Bound 4 (DB4) on a Midway Degree ELP
  • Lemma 19 (Dimension Bound 4). Take t≥r≥r′≥r″≥0 and odd-square b(x)∈F[x] and suppose that
  • (*) there exists λ(x)∈V2t+2r′,t+r′,b(x) that is separable of degree t+r′.
  • It then holds that:

  • dim*(V 2t,t+r′,b(x))=r′ and dim*(V 2t,t+r,b(x))≤r.  I.

  • Define r*=max{r 1 :r 1 ≤r and dim*(V 2t,t+r 1 ,b(x))=r 1}. Then r′≤r*.  II.

  • dim*(V 2t,t+r″,b(x))≥r″  III.
  • Proof.
  • I. By the dimension equality:

  • dim*(V 2t,t+r′,b(x))=r′,
  • and by DB3

  • dim*(V 2t,t+r,b(x))−dim*(V 2t,t+r′,b(x))≤r−r′.
  • It follows that:

  • dim*(V 2t,t+r,b(x))≤r.
  • II. Follows from the proof of I.
    III. By DB3, dim*(V2t,t+r′,b(x))−dim*(V2t,t+r″,b(x))≤r′−r″, and therefore dim*(V2t,t+r″,b(x))≥r″.♦
  • 4. Polynomial Degree Reduction Lemmas, and Probabilistic Bound
  • 4.1 Reducing the Key Equations by One Degree
  • Lemma 20. Take b(x)=Σ0≤k≤N-1bkxk∈F[x] and λ(x)∈F[x] with λ(0)=1, and suppose that

  • λ(xb(x)=λ′(x)(mod x N),  (1)
  • and that α∈F* is an inverse of a root of λ(x), i.e., (1−α·x)|λ(x). Define

  • λ*(x)=λ(x)/(1−α·x) and b*(x)=Σ0≤k≤N-1(b k+αk+1)x k.
  • It then holds that:

  • λ*(xb*(x)=λ*′(x)(mod x N).  (2)
  • Proof. Note that

  • b(x)+α/(1−αx)(mod x N)

  • =b(x)+Σ0≤k<∞ αk+1 ·x k =b*(x)(mod x N)
  • Thus by (1): λ(x)·b*(x)=(1−α·x)·λ*(x)·(b(x)+α/(1−αx)) (mod xN)

  • =((1−α·x)·λ*(x))′+α·λ*(x)(mod x N)

  • =((1−α·x)·λ*(x)′+α·λ*(x))+α·λ*(x)(mod x N)=(1−α·x)·λ*(x)′(mod x N).
  • Therefore, dividing by (1−αx):

  • λ*(x)·(b(x)+α/(1−αx))=λ*(x)′(mod x N),
  • which proves (2).♦
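Lemma 20 can be checked numerically. The sketch below works over GF(16)=GF(2)[y]/(y4+y+1), an illustrative choice (in characteristic 2, 1−αx=1+αx): take λ(x)=(1+αx)(1+βx) with b k=αk+1+βk+1, strip the factor (1+αx) as in the lemma, and verify that the reduced pair still satisfies the key equation.

```python
def gf_mul(a, b):
    """GF(16) = GF(2)[y]/(y^4 + y + 1); elements are 4-bit ints."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b10000:
            a ^= 0b10011
    return r

def gf_pow(a, n):
    r = 1
    for _ in range(n):
        r = gf_mul(r, a)
    return r

def poly_mul(p, q):
    """Multiply coefficient lists (lowest degree first) over GF(16)."""
    r = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] ^= gf_mul(pi, qj)
    return r

def poly_deriv(p):
    """Formal derivative in characteristic 2: p'_i = p_{i+1} for even i."""
    d = [0] * max(len(p) - 1, 1)
    for i in range(1, len(p), 2):
        d[i - 1] = p[i]
    return d

N, alpha, beta = 8, 2, 3                    # hypothetical distinct α, β in GF(16)*
b = [gf_pow(alpha, k + 1) ^ gf_pow(beta, k + 1) for k in range(N)]
lam = poly_mul([1, alpha], [1, beta])       # λ(x) = (1+αx)(1+βx)
pad = lambda p: (p + [0] * N)[:N]           # truncate/pad to N coefficients
assert pad(poly_mul(lam, b)) == pad(poly_deriv(lam))        # λ·b ≡ λ' (mod x^N)

# lemma 20: strip the factor (1+αx); b*_k = b_k + α^(k+1)
bstar = [b[k] ^ gf_pow(alpha, k + 1) for k in range(N)]
lamstar = [1, beta]
assert pad(poly_mul(lamstar, bstar)) == pad(poly_deriv(lamstar))
```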
  • 4.2 Reducing the Key Equation by any Number of Degrees
  • As a corollary to lemma 20 we get that:
    Lemma 21. Take s≥1, and b(x)=Σ0≤k≤N-1bkxk∈F[x] and λ(x)∈F[x] with λ(0)=1, and suppose that

  • λ(xb(x)=λ′(x)(mod x N),  (1)
  • and that α1, . . . , αs∈F* are mutually different inverses of roots of λ(x), i.e., (1−αi·x)|λ(x), for i∈[s] & αi≠αj for i,j∈[s] i≠j. Define

  • λ*(x)=λ(x)/(Πi∈[s](1−αi ·x)) and b*(x)=Σ0≤k≤N-1(b k+Σi∈[s]αi k+1)x k.
  • It then holds that:

  • λ*(xb*(x)=λ*′(x)(mod x N).♦  (2)
  • 4.3 BCH Probability Bound for Key Equations Solutions 2 (PB2)
  • Introduction. Next we arrive at a probabilistic observation. The following event A is a prototype of an event in the main soft decoding algorithm, wherein a solution to the key equation turns out to be a false ELP candidate, and hence requires some additional complexity. It will be shown that this event has probability close to q−1 in the first version and close to q−2 in the second version. In the second version there is an insignificant number of false candidates and consequently insignificant added complexity due to a false alarm that requires a Chien search.
  • Lemma 22. Take t≥r≥1, s>1, and b(x)=Σ0≤k<2t bkxk∈F[x]. Fix mutually different α1, . . . , αr+s∈F*. It holds that the probability of the following event, A, is upper bounded by q−s/(1−q−2).
    The event A: There exists λ(x)∈F[x] with λ(0)=1, and deg(λ(x))=t+r such that:

  • λ(xb(x)=λ′(x)(mod x 2t), and  (1)

  • (1−αi ·x)|λ(x), for i∈[r+s] & αi≠αj for i,j∈[r+s]i≠j.  (2)
  • Proof. Define

  • λ*(x)=λ(x)/(Πi∈[r+s](1−αi ·x)), and b*(x)=Σ0≤k≤2t-1(b k+Σi∈[r+s]αi k+1)x k.
  • By lemma 21 it holds that:

  • λ*(xb*(x)=λ*′(x)(mod x 2t) and λ*(0)=1.  (3)
  • Note also that deg(λ*(x))=t−s. It follows from PB1 above that the probability of this event is upper bounded by q−s/(1−q−2).♦
  • 5. Minimal Monotone Basis of Affine Space of Polynomials and Dimensional Setup
  • 5.1 Minimal Monotone Basis
  • A series of polynomials {pi(x)}1≤i≤s is called monotone if deg(pi(x))<deg(pi+1(x)) for i∈[s−1]. For an s-dimensional subspace U⊆F[x], A={pi(x)}1≤i≤s⊆F[x] is called a monotone basis if A is monotone and also a basis of U. Note that while there can be many monotone bases of U, the sequence {deg(pi(x))}1≤i≤s is unique for the given U, and is independent of the monotone basis we choose. A={pi(x)}1≤i≤s is called the canonic basis of U if every polynomial in A is monic and if for all i∈[s], the coefficient of xj for j=deg(pi(x)) is zero in every pa(x), where a∈[s], a≠i. By [GU] below, the canonic basis is unique. Take p*(x)∈F[x]\U, and define the affine space W=U+p*(x). B={pi(x)}1≤i≤s+1⊆F[x] is called a monotone basis of W if {pi(x)}1≤i≤s is a monotone basis of U and ps+1(x)∈W. B is called a minimal monotone basis of W if B is a monotone basis of W and deg(ps+1(x)) is minimal among all such bases. Note that when B={pi(x)}1≤i≤s+1⊆F[x] is a minimal monotone basis of W, then deg(ps+1(x)) is not in {deg(pi(x))}1≤i≤s, and therefore deg(ps+1(x))=min{deg(p(x)):p(x)∈W}≡μ. On the other hand if p(x)∈W and deg(p(x))=μ and {pi(x)}1≤i≤s is any monotone basis of U then for ps+1(x)=p(x), it holds that {pi(x)}1≤i≤s+1 is a minimal monotone basis of W.
  • 5.2 Main Dimensional Setup for the Algorithm
  • Take t≥r≥1 and odd-square b(x)∈F[x] and set V≡V2t,t+r,b(x). By the dimension equality, if there exists a separable σ(x)∈V such that deg(σ(x))=t+r, then:

  • (*)dim*(V)=r.
  • In general, given b(x) and r we cannot know in advance whether such σ(x) exists before running the ensuing algorithm. However, owing to DB4 II (see section 3.3, above), (*) is the only case of interest for the ensuing algorithm. Thus let {λi(x)}1≤i≤r+1⊆F[x] be a minimal monotone basis of V. Note that we can always find a minimal monotone basis of V by solving the associated linear equations, using Gaussian elimination. Let μ=deg(λr+1(x)). As mentioned above

  • μ=min{deg(λ(x)):λ(x)∈V}. In fact V 2t,μ,b(x)={λr+1(x)} and for j≥1:
  • if j<μ: V2t,j,b(x)=Ø;
  • if j≥μ: V2t,j,b(x)≠Ø.
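Over a small field the minimal monotone basis can be extracted exactly as described: Gaussian elimination on the direction space of V, plus a minimal-degree representative. A sketch over F=GF(2) with bitmask polynomials; the representation and function name are illustrative assumptions, not the patent's implementation:

```python
def minimal_monotone_basis(V):
    """V: nonempty list of int-bitmask GF(2)[x] polynomials forming an
    affine space. Returns [p_1, ..., p_s, p_{s+1}]: a monotone basis of
    the direction space U = V - p*, followed by a minimal-degree element
    p* of V (so deg(p_{s+1}) = μ, the minimal degree over V)."""
    p_star = min(V, key=int.bit_length)          # minimal-degree element of V
    pivot = {}                                   # leading-bit position -> basis vector of U
    for v in (u ^ p_star for u in V):            # direction space U = V + p*
        while v:
            top = v.bit_length() - 1
            if top in pivot:
                v ^= pivot[top]                  # Gaussian elimination of the leading bit
            else:
                pivot[top] = v
                break
    # distinct leading bits give strictly increasing degrees: a monotone basis
    basis = sorted(pivot.values(), key=int.bit_length)
    return basis + [p_star]

# toy affine space {1+x, (1+x)^3} = (1+x) + span{x^2+x^3}
assert minimal_monotone_basis([0b11, 0b1111]) == [0b1100, 0b11]
```

For this toy space the returned λr+1(x)=1+x realizes the minimal degree μ=1, and the single direction vector x2+x3 spans U.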

Claims (18)

1. A digital electronic circuit tangibly embodying a program of instructions executed by the digital electronic circuit to perform method steps for Bose-Chaudhuri-Hocquenghem (BCH) soft error decoding, comprising the steps of:
receiving a codeword x through a digital electronic communication channel, wherein the received codeword x has τ=t+r errors for some r≥1, wherein t=(d−1)/2 and d is a minimal distance of a BCH code;
computing a minimal monotone basis {λi(x)}1≤i≤r+1⊆F[x] of an affine space V={λ(x)∈F[x]:λ(x)·S(x)=λ′(x)(mod x2t), deg(λ(x))≤t+r}, wherein λ(x) is an error locator polynomial, S(x) is a syndrome, and F[x]=GF(q) wherein q=2m for m>1;
computing a matrix A≡(λji))i∈[w],j∈[r+1], wherein W={β1, . . . , βw} is a set of weak bits in x;
processing for every subset W′⊆W by retrieving from memory a set W″=R(W′), computing BW′ by adding one row to BW″ and performing Gaussian elimination operations on BW′, wherein R(W′) is reliability probabilities of the bits in W′; and
wherein when a first r′ columns of BW′ are a transpose of a systematic matrix and deg(λ(x))=t+r′, wherein 1≤r′≤r, performing:
computing u(x)=gcd(λ(x),λ′(x)), wherein λ′(x) is a derivative of λ(x);
computing λ(Φ\W′) and deducing from it Zλ(x),Φ wherein Zλ(x),Φ={β∈Φ:λ(β)=0}, when u(x) is a scalar in F*;
adding a pair (λ(x), Zλ(x),Φ) to a set L of all (r′, λ(x), Zλ(x),Φ) such that 1≤r′≤r, λ(x)∈V′r′, |Zλ(x),W|≥r′−1, and |Zλ(x),Φ|=t+r′, when |Zλ(x),Φ|=t+r′; and
outputting the set L to the digital electronic communication channel.
2. The method of claim 1, wherein the one row added to BW″ is an arbitrary odd-square polynomial in the codeword x.
3. The method of claim 1, further comprising forming the error locating polynomial from coefficients in the set L, and flipping channel hard decisions at error locations found in the received codeword.
4. The method of claim 1, wherein λ(x)∈Vr′ is unique and λ(β)=0 for every β∈W′, when the first r′ columns of BW′ are a transpose of a systematic matrix.
5. The method of claim 1, further comprising terminating the processing of W′ when deg(u(x))≥1.
6. The method of claim 1, further comprising terminating the processing of W′ when the first r′ columns of BW′ are not a transpose of a systematic matrix or deg(λ(x))≠t+r′.
7. The method of claim 1, further comprising, before computing u(x)=gcd(λ(x),λ′(x)), computing, for every r≥ρ≥r′+2 and a pair (W1, λ1(x)) such that λ1(x)∈V′ρ and W1⊆W with |W1|=ρ+1, wherein λ1(x)∈Vρ is a unique polynomial such that λ1(W1)=0 and λ1′(β)≠0 for every β in W1.
8. The method of claim 7, further comprising terminating the processing of W1 when for any β in W1, λ1′(β)=0.
9. A non-transitory program storage device readable by a computer, tangibly embodying a program of instructions executed by the computer to perform method steps for a Bose-Chaudhuri-Hocquenghem (BCH) soft error decoding, comprising the steps of:
receiving a codeword x through a digital electronic communication channel, wherein the received codeword x has τ=t+r errors for some r≥1, wherein t=(d−1)/2 and d is a minimal distance of a BCH code;
performing error correction on the codeword to generate a corrected codeword; and
outputting data included in the corrected codeword to the digital electronic communication channel,
wherein performing the error correction comprises
computing a minimal monotone basis {λi(x)}1≤i≤r+1⊆F[x] of an affine space V={λ(x)∈F[x]:λ(x)·S(x)=λ′(x)(mod x2t), λ(0)=1, deg(λ(x))≤t+r}, wherein λ(x) is an error locator polynomial, S(x) is a syndrome, and F[x]=GF(q) wherein q=2m for m>1;
computing a matrix A≡(λji))i∈[w],j∈[r+1], wherein W={β1, . . . , βw} is a set of weak bits in x;
constructing a submatrix of r+1 rows from submatrices of r+1 rows of the subsets of A such that the last column is a linear combination of the other columns;
forming a candidate error locating polynomial using coefficients of the minimal monotone basis that result from the constructed submatrix;
performing a fast Chien search wherein the candidate error locating polynomial is verified; and
flipping channel hard decisions at error locations found in the candidate error locating polynomial and returning the decoded codeword x.
10. The computer-readable program storage device of claim 9, wherein constructing a submatrix of r+1 rows from sub matrices of r+1 rows of the subsets of A such that the last column is a linear combination of the other columns comprises:
processing for every subset W′⊆W by retrieving from memory a set W″=R(W′), computing BW′ by adding one row to BW″ and performing Gaussian elimination operations on BW′, wherein R(W′) is reliability probabilities of the bits in W′;
wherein when a first r′ columns of BW′ are a transpose of a systematic matrix and deg(λ(x))=t+r′, wherein 1≤r′≤r, performing:
computing u(x)=gcd(λ(x), λ′(x)), wherein λ′(x) is a derivative of λ(x);
computing λ(Φ\W′) and deducing from it Zλ(x),Φ wherein Zλ(x),Φ={β∈Φ: λ(β)=0}, when u(x) is a scalar in F*;
adding a pair (λ(x), Zλ(x),Φ) to a set L of all (r′, λ(x), Zλ(x),Φ) such that 1≤r′≤r, λ(x)∈V′r′, |Zλ(x),W|≥r′+1, and |Zλ(x),Φ|=t+r′, when |Zλ(x),Φ|=t+r′; and
outputting the set L.
11. The computer-readable program storage device of claim 10, wherein the one row added to BW″ is an arbitrary odd-square polynomial in the codeword x.
12. The computer-readable program storage device of claim 10, wherein λ(x)∈Vr′ is unique and λ(β)=0 for every β∈W′, when the first r′ columns of BW′ are a transpose of a systematic matrix.
13. The computer-readable program storage device of claim 10, the method further comprising terminating the processing of W′ when deg(u(x))≥1.
14. The computer-readable program storage device of claim 10, the method further comprising terminating the processing of W′ when the first r′ columns of BW′ are not a transpose of a systematic matrix or deg(λ(x))≠t+r′.
15. The computer-readable program storage device of claim 10, the method further comprising, before computing u(x)=gcd(λ(x),λ′(x)), computing, for every r≥ρ≥r′+2 and a pair (W1, λ1(x)) such that λ1(x)∈V′ρ and W1⊆W with |W1|=ρ+1, wherein λ1(x)∈Vρ is a unique polynomial such that λ1(W1)=0 and λ1′(β)≠0 for every β in W1.
16. The computer-readable program storage device of claim 15, the method further comprising terminating the processing of W1 when for any β in W1, λ1′(β)=0.
17. A computer memory-based product, comprising:
a memory; and
a digital circuit tangibly embodying a program of instructions executed by the computer to perform a method for a Bose-Chaudhuri-Hocquenghem (BCH) soft error decoding, wherein the method comprises the steps of:
receiving a codeword x through a digital electronic communication channel, wherein the received codeword x has τ=t+r errors for some r≥1, wherein t=(d−1)/2 and d is a minimal distance of a BCH code;
performing error correction on the codeword to generate a corrected codeword; and
outputting data included in the corrected codeword to the digital electronic communication channel,
wherein performing the error correction comprises
computing a minimal monotone basis {λi(x)}1≤i≤r+1⊆F[x] of an affine space V={λ(x)∈F[x]:λ(x)·S(x)=λ′(x)(mod x2t), λ(0)=1, deg(λ(x))≤t+r}, wherein λ(x) is an error locator polynomial, S(x) is a syndrome, and F[x]=GF(q) wherein q=2m for m>1;
computing a matrix A≡(λji))i∈[w],j∈[r+1], wherein W={β1, . . . , βw} is a set of weak bits in x;
processing for every subset W′⊆W by retrieving from memory a set W″=R(W′), computing BW′ by adding one row to BW″ and performing Gaussian elimination operations on BW′, wherein R(W′) is reliability probabilities of the bits in W′;
wherein when a first r′ columns of BW′ are a transpose of a systematic matrix and deg(λ(x))=t+r′, wherein 1≤r′≤r, performing:
computing u(x)=gcd(λ(x), λ′(x)), wherein λ′(x) is a derivative of λ(x);
computing λ(Φ\W′) and deducing from it Zλ(x),Φ wherein Zλ(x),Φ={β∈Φ: λ(β)=0}, when u(x) is a scalar in F*;
adding a pair (λ(x), Zλ(x),Φ) to a set L of all (r′, λ(x), Zλ(x),Φ) such that 1≤r′≤r, λ(x)∈V′r′, |Zλ(x),W|≥r′+1, and |Zλ(x),Φ|=t+r′, when |Zλ(x),Φ|=t+r′; and
outputting the set L.
18. The computer memory-based product of claim 17, wherein the memory is at least one of a solid-state drive, a universal flash storage, or a DRAM.
US17/647,441 2022-01-07 2022-01-07 BCH fast soft decoding beyond the (d-1)/2 bound Active US11689221B1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
US17/647,441 US11689221B1 (en) 2022-01-07 2022-01-07 BCH fast soft decoding beyond the (d-1)/2 bound
DE102022118166.9A DE102022118166A1 (en) 2022-01-07 2022-07-20 BCH fast soft decoding beyond the (D-1)/2 limit
KR1020220127268A KR20230107104A (en) 2022-01-07 2022-10-05 Bch fast soft decoding beyond the (d-1)/2 bound
CN202211410889.1A CN116418352A (en) 2022-01-07 2022-11-11 Method for BCH soft decoding and apparatus for performing the same


Publications (2)

Publication Number Publication Date
US11689221B1 US11689221B1 (en) 2023-06-27
US20230223958A1 true US20230223958A1 (en) 2023-07-13

Family

ID=86895703

Family Applications (1)

Application Number Title Priority Date Filing Date
US17/647,441 Active US11689221B1 (en) 2022-01-07 2022-01-07 BCH fast soft decoding beyond the (d-1)/2 bound

Country Status (4)

Country Link
US (1) US11689221B1 (en)
KR (1) KR20230107104A (en)
CN (1) CN116418352A (en)
DE (1) DE102022118166A1 (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8171368B1 (en) * 2007-02-16 2012-05-01 Link—A—Media Devices Corporation Probabilistic transition rule for two-level decoding of reed-solomon codes
US8381082B1 (en) * 2007-02-27 2013-02-19 Marvell International, Inc. Power-saving area-efficient hybrid BCH coding system
US8674860B2 (en) * 2012-07-12 2014-03-18 Lsi Corporation Combined wu and chase decoding of cyclic codes
US9619327B2 (en) * 2015-06-30 2017-04-11 SK Hynix Inc. Flash memory system and operating method thereof
US10218388B2 (en) * 2015-12-18 2019-02-26 SK Hynix Inc. Techniques for low complexity soft decoder for turbo product codes
US10439644B2 (en) * 2015-07-14 2019-10-08 Western Digital Technologies, Inc. Error locator polynomial decoder and method
US10439643B2 (en) * 2016-07-28 2019-10-08 Indian Institute Of Science Reed-Solomon decoders and decoding methods
US10461777B2 (en) * 2015-07-14 2019-10-29 Western Digital Technologies, Inc. Error locator polynomial decoder and method
US10523245B2 (en) * 2016-03-23 2019-12-31 SK Hynix Inc. Soft decoder for generalized product codes
US10756763B2 (en) * 2018-09-28 2020-08-25 Innogrit Technologies Co., Ltd. Systems and methods for decoding bose-chaudhuri-hocquenghem encoded codewords




Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

AS Assignment

Owner name: SAMSUNG ELECTRONICS CO., LTD., KOREA, REPUBLIC OF

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:DOR, AVNER;SHANY, YARON;DOUBCHAK, ARIEL;AND OTHERS;SIGNING DATES FROM 20211219 TO 20211221;REEL/FRAME:063578/0803

STCF Information on status: patent grant

Free format text: PATENTED CASE