WO2000007323A1 - Forward error correcting system with encoders configured in parallel and/or series

Publication number
WO2000007323A1
Authority: WO (WIPO PCT)
Prior art keywords: shall, code, convolutional, data, wherein
Application number: PCT/US1999/017369
Other languages: French (fr)
Inventors: Juan Alberto Torres, Victor Demjanenko, Frederic Hirzel
Original Assignee: Vocal Technologies, Ltd.
Application filed by Vocal Technologies, Ltd.
Priority to EP99938916A (published as EP1101313A1)
Publication of WO2000007323A1


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 1/00: Arrangements for detecting or preventing errors in the information received
    • H04L 1/004: Arrangements for detecting or preventing errors in the information received by using forward error control
    • H04L 1/0045: Arrangements at the receiver end
    • H04L 1/0047: Decoding adapted to other signal detection operation
    • H04L 1/005: Iterative decoding, including iteration between signal detection and decoding operation
    • H04L 1/0056: Systems characterized by the type of code used
    • H04L 1/0064: Concatenated codes
    • H04L 1/0065: Serial concatenated codes
    • H04L 1/0066: Parallel concatenated codes
    • H04L 1/0071: Use of interleaving
    • H04L 27/00: Modulated-carrier systems
    • H04L 27/26: Systems using multi-frequency codes
    • H04L 27/2601: Multicarrier modulation systems
    • H04L 27/2614: Peak power aspects
    • H04L 27/2615: Reduction thereof using coding

Definitions

  • The present invention relates to the use of forward error correction techniques in data transmission over wired and wireless systems, using an optional Reed-Solomon encoder as an outer encoder and a multiple concatenated convolutional encoder (in serial or parallel configuration) as an inner encoder.
  • A preferred embodiment of the invention pertains particularly to ADSL systems, as a representative species of wired systems.
  • The invention is based on the use of a multiple concatenated convolutional encoder in serial or in parallel configuration.
  • SMCCC: Serial Multiple Concatenated Convolutional Code.
  • PMCCC: Parallel Multiple Concatenated Convolutional Code. This gives extra redundancy to the signal in a way that improves the performance of the coding (increasing the coding gain).
  • In Trellis Coded Modulation, constellations of more than 2 points, such as Quadrature Amplitude Modulation (QAM) and Quaternary Phase Shift Keying (QPSK), are used to increase the bit rate at the cost of smaller Euclidean distances (the distance between adjacent points in a signal constellation). Coding techniques are used to decrease transmission errors when transmitting over power-limited channels.
  • Trellis Coding combines coding and modulation to improve bit error rate performance.
  • The basic idea behind Trellis Coding is to introduce controlled redundancy in order to reduce channel error rates. What sets Trellis Codes apart is that this technique introduces redundancy by doubling the number of signal points in the constellation.
  • The actual (noisy) received signal will tend to be somewhere around the "correct" signal point.
  • The receiver chooses the signal point closest to the noisy received signal. As more points are added to the signal constellation and the power is kept constant, the probability of error increases, because the Euclidean distance (distance between adjacent signal points) "d" is decreased and the receiver has a more difficult job making the correct decision. Thus it makes sense that the Euclidean distance "d" dominates the probability-of-error expressions.
  • Trellis Coding expands on this concept to increase the Euclidean path distance. A more thorough derivation of the probability of error shows that both error expressions depend on the signal spacing d and that the probability of QPSK errors is higher (not surprising since the signal spacing is smaller).
  • Trellis coding enables us to recover from this increase in probability of error.
  • Trellis Coding uses 2*M possible symbols for the same factor-of-M reduction of bandwidth (and each signal is still transmitted during the same signaling period).
  • Trellis Coding provides controlled redundancy by doubling the number of signal points.
  • Trellis coding defines the way in which signal transitions are allowed to occur (signal transitions that do not follow this scheme will be detected as errors). This is best explained using the Trellis Coded 8-PSK example.
  • The 8-PSK signal constellation is shown in Figure 7, where we can see the individual signal points.
  • The received signal includes noise and will tend to be located somewhere around the state points.
  • The receiver again has to make a decision based on which signal point is closest, and a mistaken output state value will be chosen if the receiver makes an incorrect decision.
  • Figure 12 shows the case in which "1" followed by "2" is received instead of the transmitted "7"-"7" sequence.
  • The Euclidean distance (see Figure 7 for an illustration of the Euclidean distances and Figure 8 for the Trellis diagram) for this path is
  • The only remaining error event is the single-interval "3" instead of "7" error event, which has a Euclidean distance of 2 (see Figure 7).
  • The minimum Euclidean distance for a trellis is the minimum free Euclidean distance "d_free" (similar to the minimum free distance in convolutional coding).
  • d_free = 1.608
  • This is a low coding gain for the amount of overhead required to handle Trellis coding.
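  • For illustration, the distances quoted above can be checked numerically. The minimal Python sketch below assumes a unit-energy 8-PSK constellation and the two-step error event described for the two-state trellis (Figure 7 itself is not reproduced here), and computes the resulting asymptotic coding gain over uncoded QPSK.
```python
import math

# Unit-energy 8-PSK constellation (assumed layout; Figure 7 is not reproduced here).
points = [complex(math.cos(2 * math.pi * k / 8), math.sin(2 * math.pi * k / 8)) for k in range(8)]

# Euclidean distance between points separated by k positions: 2*sin(k*pi/8).
for k in range(1, 5):
    print(f"separation {k}: d = {abs(points[0] - points[k]):.3f}")

# Two-state trellis example: the dominant multi-interval error event accumulates
# one step of d1 = 2*sin(pi/8) and one step of d2 = sqrt(2), giving the free
# Euclidean distance d_free = sqrt(d1**2 + d2**2) ~= 1.608 quoted above.
d1, d2 = 2 * math.sin(math.pi / 8), math.sqrt(2)
d_free = math.sqrt(d1 ** 2 + d2 ** 2)

# Asymptotic coding gain over uncoded QPSK (minimum distance sqrt(2) at equal energy).
gain_db = 10 * math.log10(d_free ** 2 / 2)
print(f"d_free = {d_free:.3f}, coding gain over QPSK ~= {gain_db:.2f} dB")
```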
  • The present invention comprises forward error correction techniques in data transmission over wired systems using an optional Reed-Solomon encoder as an outer encoder and a multiple concatenated convolutional encoder (MCCC) (in serial or parallel configuration) as an inner encoder.
  • MCCC: multiple concatenated convolutional encoder.
  • By "optional Reed-Solomon outer encoder" we mean that it may or may not be present.
  • ADSL DMT: Discrete Multi-Tone, multiple-carrier.
  • CAP/QAM: single-carrier.
  • Other xDSL systems: HDSL, VDSL, HDSL2, etc.
  • Other wired communication systems, wireless systems and satellite systems.
  • ADSL modems are designed to operate between a Central Office (CO) (or a similar point of presence) and a customer premises (CPE). As such, they use existing telephone network wiring between the CO and the CPE.
  • There are modems in this class which function in a generally similar manner. All of these modems transmit their signals usually above the voice band.
  • TCM Trellis Coded Modulation
  • Figure 1 shows a BPSK signal constellation
  • Figure 2 shows a QPSK signal constellation
  • Figure 4 shows a QPSK signal constellation.
  • Figure 8 shows a two-state Trellis 8-PSK system
  • Figure 9 shows an error event "5" -> "6" in a two-state Trellis encoding.
  • Figure 10 shows an error event "1" -> "6" in a two-state Trellis encoding.
  • Figure 11 shows an error event "5" -> "2" in a two-state Trellis encoding.
  • Figure 12 shows an error event "1" -> "2" in a two-state Trellis encoding.
  • Figure 13 shows a four-state Trellis, 8-PSK system.
  • Figure 14 shows a serially concatenated (n,k,N) block code.
  • Figure 15 shows the action of a uniform interleaver of length 4 on sequences of weight 2.
  • Figure 16 shows a serially concatenated (n,k,N) convolutional code.
  • Figure 17 shows a code sequence in A_{l,h,j}.
  • Figure 24 shows analytical bounds for SMCCC4.
  • Figure 25 shows a PMCCC.
  • Figure 26 shows a transmission system structure.
  • Figure 27 shows notations in a transmission system structure.
  • Figure 28 shows a PMCCC of three convolutional codes.
  • Figure 29 shows a signal flow graph for extrinsic information.
  • Figure 30 shows an iterative decoder structure for three parallel concatenated codes.
  • Figure 31 shows an iterative decoder structure for two parallel concatenated codes.
  • Figure 32 shows the convergence of turbo coding bit-error probability versus number of iterations for various Eb/No using the SW2-BCJR algorithm.
  • Figure 33 shows the convergence of turbo coding bit-error probability versus number of iterations for various Eb/No using the SWAL2-BCJR algorithm.
  • Figure 34 shows bit-error probability as a function of the bit signal-to-noise ratio using the SW2-BCJR and SWAL2-BCJR algorithms.
  • Figure 35 shows the number of iterations to achieve several bit-error probabilities as a function of the bit signal-to-noise ratio using the SWAL2-BCJR algorithm.
  • Figure 36 shows the number of iterations to achieve several bit-error probabilities as a function of the bit signal-to-noise ratio using the SW2-BCJR algorithm.
  • Figure 37 shows a basic structure for backward computation in the log-BCJR MAP algorithm.
  • Figure 38 shows a Trellis Termination.
  • Figure 39 shows an example where a block interleaver fails to "break" the input sequence.
  • Figure 42 shows three-code performance.
  • Figure 43 shows a comparison of SMCBC and PMCBC with various interleaver lengths chosen so as to yield the same input decoding delay.
  • Figure 44 shows a comparison of SMCCC and PMCCC with four-state MCCs.
  • PMCCC: parallel concatenated convolutional code.
  • SMCCC: serial concatenated convolutional code.
  • Figure 47 shows a trellis encoder
  • Figure 48 shows an edge of the trellis section
  • Figure 49 shows the soft-input soft-output (SISO) model.
  • Figure 50 shows the convergence of PMCCC-decoding bit-error probability versus the number of iterations using the ASW-SISO algorithm.
  • Figure 51 shows the convergence of iterative decoding for a serial concatenated code: bit-error rate probability versus number of iterations using the ASW-SISO algorithm.
  • Figure 52 shows a comparison of two rate 1/3 PMCCC and SMCCC. The curves refer to six and nine iterations of the decoding algorithm and to an equal input decoding delay of 16,384.
  • Figure 53 shows a block diagram for a modem transmitter in accordance with this invention, for the Central Office and for STM transport.
  • Figure 54 shows a block diagram for a modem transmitter in accordance with this invention, for the Central Office and for ATM transport.
  • Figure 55 shows a block diagram for a modem transmitter in accordance with this invention, for the Remote modem and for STM transport.
  • Figure 56 shows a block diagram for a modem transmitter in accordance with this invention, for the Remote modem and for ATM transport.
  • Figure 57 shows the ATU-C functional interfaces for STM transport at the V-C reference point.
  • Figure 58 shows the ATU-C functional interfaces to the ATM layer at the V-C reference point.
  • Figure 59 shows an ATM cell delineation state machine.
  • Figure 60 shows an example implementation of the NTR phase offset measurement.
  • Figure 61 shows an ADSL superframe structure - ATU-C transmitter.
  • Figure 62 shows a fast synchronization byte ("fast byte") format - ATU-C transmitter.
  • Figure 63 shows an interleaved synchronization byte ("sync byte") format - ATU-C transmitter.
  • Figure 64 shows a fast data buffer - ATU-C transmitter.
  • Figure 65 shows an interleaved data buffer - ATU-C transmitter.
  • Figure 66 shows a scrambler.
  • Figure 67 shows a tone ordering and bit extraction example (without trellis coding).
  • Figure 68 shows a tone ordering and bit extraction example (with trellis coding).
  • Figure 69 shows a conversion of u to v and w
  • Figure 70 shows a finite state machine for Wei's encoder
  • Figure 71 shows a convolutional Encoder
  • Figure 72 shows a trellis diagram
  • Figure 74 shows an expansion of point n into the next larger square constellation.
  • Figure 77 shows an MTPR test.
  • Figure 78 shows the ATU-R functional interfaces for STM transport at the T-R reference point.
  • Figure 79 shows the ATU-R functional interfaces to the ATM layer at the T-R reference point.
  • Figure 80 shows a fast data buffer - ATU-R transmitter.
  • Figure 81 shows an interleaved data buffer - ATU-R transmitter.
  • Figure 82 shows a two parallel concatenated convolutional encoder.
  • Figure 83 shows a conversion of u to v and w in the PMCCC encoder.
  • Figure 84 shows a decoder for PMCCC.
  • Figure 85 shows the convergence of the "constellation" interleaver for PMCCC.
  • Figure 86 shows an interleaver for PMCCC.
  • Figure 87 shows a Serial Convolutional Concatenated Encoder.
  • Figure 88 shows a decoder for SMCCC.
  • Figure 89 shows an interleaver for SMCCC.
  • Figure 90 shows the Convolutional Concatenated Encoder used for simulations.
  • Figure 91 shows simulations for PMCCC.
  • Figure 92 shows the Convolutional encoder used for simulations.
  • The overall SMCBC is then an (n, k) code, and we will refer to it as the (n, k, N) code C_s, including also the interleaver length.
  • The CCs are linear, so that the SMCBC also is linear and the uniform error property applies, i.e., the bit-error probability can be evaluated assuming that the all-zero codeword has been transmitted.
  • The output word of the outer code and the input word of the inner code share the same weight.
  • Use of the uniform interleaver permits the computation of the "average" performance of SMCBCs, intended as the expectation of the performance of SMCBCs using the same MCCs, taken over the ensemble of all interleavers of a given length. The meaningfulness of the average performance can be proved, in the sense that there will always be, for each value of the signal-to-noise ratio, at least one particular interleaver yielding performance better than or equal to the average.
  • A^{C_s}(W, H) = Σ_{w,h} A_{w,h} W^w H^h   (1), where A_{w,h} is the number of codewords of the SMCBC with weight h associated with an input word of weight w.
  • CWEF conditional weight enumerating function
  • Each codeword of the outer code C_o, through the action of the uniform interleaver, enters the inner encoder generating codewords of the inner code C_i.
  • The number A_{w,h} of codewords of the SMCBC of weight h associated with an input word of weight w is given by
  • A^{C_o}(W, L) is the conditional weight distribution of the input words that generate codewords of the outer code of weight l.
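  • As an illustration of how the uniform interleaver enters this computation, the sketch below applies the standard combination rule A^{C_s}_{w,h} = Σ_l A^{C_o}_{w,l} A^{C_i}_{l,h} / C(N,l) to toy constituent enumerators; both the rule and the toy numbers are assumptions here, since the patent's own equation is not reproduced in this extract.
```python
from math import comb

def average_cwef(A_outer, A_inner, N):
    """Average weight distribution of a serial concatenation through a uniform
    interleaver of length N (standard combination rule, assumed here).

    A_outer[(w, l)]: number of outer codewords of weight l produced by input words of weight w.
    A_inner[(l, h)]: number of inner codewords of weight h produced by input words of weight l.
    Returns A_s[(w, h)] for the overall (n, k, N) code.
    """
    A_s = {}
    for (w, l), a_o in A_outer.items():
        for (l2, h), a_i in A_inner.items():
            if l == l2:
                A_s[(w, h)] = A_s.get((w, h), 0.0) + a_o * a_i / comb(N, l)
    return A_s

# Toy usage with hypothetical constituent enumerators:
A_outer = {(1, 3): 2, (2, 4): 1}
A_inner = {(3, 5): 4, (4, 6): 3}
print(average_cwef(A_outer, A_inner, N=8))
```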
  • A(l, H, j) ≡ Σ_h A_{l,h,j} H^h   (11) is the weight enumerating function of sequences of the convolutional code that concatenate j error events with total input weight l (see Figure 17), where A_{l,h,j} is the number of sequences of weight h, input weight l, and number of concatenated error events j.
  • The coefficient A_{w,h} of the equivalent block code can be approximated (this assumption permits neglecting the length of error events compared to N, which also assumes that the
  • The value N/p derives from the fact that the code has rate p/n, and thus N bits correspond to N/p input words or, equivalently, trellis steps.
  • n_M, the largest number of error events concatenated in a codeword of weight h and generated by a weight-l input sequence, is a function of h and l that depends on the encoder.
  • d_f^o is the free distance of the outer code.
  • By free distance d_f we mean the minimum Hamming weight of error events for convolutional CCs and the minimum Hamming weight of codewords for block CCs.
  • n_M^i and n_M^o are the maximum number of concatenated error events in codewords of the inner and outer code of weights h_m and l, respectively; the following inequalities hold true:
  • Equation (21) shows that the exponent of N corresponding to the minimum weight of SMCCC codewords is always negative for d_f^o ≥ 2, thus affording an interleaver gain at high Eb/No. Substituting the exponent α(h_m) into Expression (16), truncated to the first term of the summation in h, yields lim P_b(e) ∝ B_m N^{α(h_m)}.
  • W_m is the set of input weights w that generate codewords of the outer code with weight h_m. Expression (22) suggests the following conclusions:
  • The minimum weight of input sequences generating error events is 2.
  • An input sequence of weight l can generate at most ⌊l/2⌋ error events.
  • In Equation (29b), w_M is the maximum input weight yielding outer codewords with weight equal to d_f^o, and A' is the number of such codewords.
  • N_f^o and w_M should be minimized.
  • SMCBCs are obtained as follows: a) the first is the (7m, 3m, N) SMCBC; b) the second is a (15m, 4m, N) SMCBC using as outer code a (5, 4) parity-check code and as inner code a (15, 5) Bose-Chaudhuri-Hocquenghem (BCH) code; c) the third is a (15m, 4m, N) SMCBC using as outer code a (7, 4) Hamming code and as inner code a (15, 7) BCH code.
  • SMCCC1 is a (3,1,N) SMCCC, using as outer code a four-state (2,1) recursive, systematic convolutional encoder and as inner code a four-state (3,2) recursive, systematic convolutional encoder.
  • SMCCC2 is a (3,1,N) SMCCC, using as outer code the same four-state (2,1) recursive, systematic convolutional encoder as SMCCC1, and as inner code a four-state (3,2) nonrecursive convolutional encoder.
  • SMCCC3 is a (3,1,N) SMCCC, using as outer code a four-state (2,1) nonrecursive convolutional encoder, and as inner code the same four-state (3,2) recursive, systematic convolutional encoder as SMCCC1.
  • SMCCC4 is a (6,2,N) SMCCC using
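  • A minimal sketch of a four-state (2,1) recursive systematic convolutional encoder of the kind named above; the (7,5) octal generators are an illustrative assumption, not necessarily the generators used in SMCCC1 through SMCCC4.
```python
def rsc_encode(bits):
    """Rate-1/2, four-state recursive systematic convolutional encoder with
    generator matrix G(D) = [1, (1 + D^2)/(1 + D + D^2)] (octal (7,5); an
    illustrative choice, not necessarily the patent's)."""
    s1 = s2 = 0                      # the two delay elements
    systematic, parity = [], []
    for u in bits:
        a = u ^ s1 ^ s2              # feedback node: g0 = 1 + D + D^2
        p = a ^ s2                   # parity output: g1 = 1 + D^2
        systematic.append(u)
        parity.append(p)
        s1, s2 = a, s1               # shift the register
    return systematic, parity

# Example: encode a short block (trellis not terminated here).
print(rsc_encode([1, 0, 1, 1, 0, 0]))
```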
  • PMCCC: Parallel Multiple Concatenated Convolutional Code.
  • The algorithms work in a sliding-window form (like the Viterbi algorithm) and can thus be used to decode continuously transmitted sequences obtained by PMCCC, without requiring code trellis termination.
  • A heuristic explanation is also given of how to embed the maximum a posteriori algorithms into the iterative decoding of PMCCC.
  • The performances of the two algorithms are compared on the basis of a powerful rate 1/3 PMCCC. Basic circuits to implement the simplified a posteriori decoding algorithm using lookup tables, and two further approximations (linear and threshold) with a very small penalty to eliminate the need for lookup tables, are proposed.
  • The broad framework of this analysis encompasses digital transmission systems where the received signal is a sequence of waveforms whose correlation extends well beyond T, the signaling period.
  • Correlation such as coding, intersymbol interference (ISI), or correlated fading.
  • ISI: intersymbol interference.
  • The optimum receiver in such situations cannot perform its decisions on a symbol-by-symbol basis, so that deciding on a particular information symbol u_k involves processing a portion of the received signal T_j seconds long, with T_j > T.
  • The final aim is to find suitable soft-output decoding algorithms for iterated staged decoding of PMCCC employed in a continuous transmission.
  • Both source and code sequences are defined over a time index set K (a finite or infinite set of integers).
  • The code C can be written as a subset of the Cartesian product of C by itself K times, i.e., C ⊆ C^K.
  • The channel symbols are then transmitted over a stationary memoryless channel with output symbols y.
  • The channel is characterized by the transition probability distribution (discrete or continuous, according to the channel model) P(y|x).
  • The BCJR is the optimum algorithm to produce the sequence of APP.
  • The notations u, c, x, and y will refer to sequences n symbols long, and the integer time variable k will assume the values 1, ..., n.
  • The encoder admits a trellis representation with N states, so that the code sequences c (and the corresponding transmitted signal sequences x) can be represented as paths in the trellis and uniquely associated with a state sequence s = (s_0, ..., s_n) whose first and last states, s_0 and s_n, are assumed to be known by the decoder.
  • The demodulator supplies to the decoder the "branch metrics" γ_k of Equation (38), and the decoder computes the probabilities α_k according to Equation (40).
  • The obtained values of α_k(S_i), as well as the γ_k, are stored for all k, S_i, and x.
  • The decoder recursively computes the probabilities β_k according to the recursion of Equation (42) and uses them, together with the stored α's and γ's, to compute the a posteriori transition probabilities σ_k(S_i, u) according to Equation (37) and, finally, the APP P_k(u|y) from Equation (36).
  • The BCJR algorithm requires that the whole sequence have been received before starting the decoding process. In this aspect, it is similar to the Viterbi algorithm in its optimum version. To apply it in a PMCCC, we need to subdivide the information sequence into blocks, decode them by terminating the trellises of both CCs, and then decode the received sequence block by block. Beyond this rigidity, this solution also reduces the overall code rate.
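  • A minimal probability-domain sketch of the forward (α), backward (β) and APP computations described above, written for a generic four-state recursive trellis over a BPSK/AWGN channel; the code, channel model and normalization are illustrative assumptions and do not reproduce the patent's Equations (36) through (42) exactly.
```python
import math

# Trellis of a four-state RSC (illustrative generators): next state and output bits.
def build_trellis():
    nxt, out = {}, {}
    for s in range(4):
        s1, s2 = (s >> 1) & 1, s & 1
        for u in (0, 1):
            a = u ^ s1 ^ s2            # feedback node
            p = a ^ s2                 # parity bit
            nxt[(s, u)] = (a << 1) | s1
            out[(s, u)] = (u, p)
    return nxt, out

def bcjr(received, sigma2):
    """received: list of (y_sys, y_par) channel outputs for BPSK mapping 0->-1, 1->+1.
    Returns P(u_k = 1 | y). The trellis is assumed to start in state 0; the final
    state is left unconstrained in this sketch."""
    nxt, out = build_trellis()
    n = len(received)

    def gamma(k, s, u):                # branch metric for edge (s, u) at time k
        ys, yp = received[k]
        xs, xp = (2 * b - 1 for b in out[(s, u)])
        return 0.5 * math.exp(-((ys - xs) ** 2 + (yp - xp) ** 2) / (2 * sigma2))

    # forward recursion (alpha), normalized at every step
    alpha = [[0.0] * 4 for _ in range(n + 1)]
    alpha[0][0] = 1.0
    for k in range(n):
        for s in range(4):
            for u in (0, 1):
                alpha[k + 1][nxt[(s, u)]] += alpha[k][s] * gamma(k, s, u)
        tot = sum(alpha[k + 1]) or 1.0
        alpha[k + 1] = [a / tot for a in alpha[k + 1]]

    # backward recursion (beta)
    beta = [[0.0] * 4 for _ in range(n + 1)]
    beta[n] = [0.25] * 4
    for k in range(n - 1, -1, -1):
        for s in range(4):
            for u in (0, 1):
                beta[k][s] += gamma(k, s, u) * beta[k + 1][nxt[(s, u)]]
        tot = sum(beta[k]) or 1.0
        beta[k] = [b / tot for b in beta[k]]

    # a posteriori probabilities P(u_k = 1 | y)
    app = []
    for k in range(n):
        num = sum(alpha[k][s] * gamma(k, s, 1) * beta[k + 1][nxt[(s, 1)]] for s in range(4))
        den = num + sum(alpha[k][s] * gamma(k, s, 0) * beta[k + 1][nxt[(s, 0)]] for s in range(4))
        app.append(num / den)
    return app

# Tiny usage: walk the trellis as an encoder, map to +/-1, decode noiselessly.
nxt, out = build_trellis()
bits, s = [1, 0, 1, 1, 0], 0
rx = []
for u in bits:
    ys, yp = ((2 * b - 1) for b in out[(s, u)])
    rx.append((ys, yp))
    s = nxt[(s, u)]
print([round(p, 3) for p in bcjr(rx, sigma2=0.5)])
```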
  • A more flexible decoding strategy is offered by a modification of the BCJR algorithm in which the decoder operates on a fixed memory span, and decisions are forced with a given delay D.
  • SW-BCJR: sliding window BCJR.
  • The SW1-BCJR algorithm requires storage of N × D values of α's and M × D values of the probabilities γ_k(x) generated by the soft demodulator. Moreover, to update the α's and β's for each time instant, the algorithm needs to perform
  • The Viterbi algorithm would require, in the same situation, M × 2^{k0} additions and M × 2^{k0}-way comparisons, plus the trace-back operations, to get the decoded bits.
  • This version of the sliding window BCJR algorithm does not require storage of the N × D values of α's, as they are updated with a delay of D steps. As a consequence, only N values of α's and M × D values of the probabilities γ_k(x) generated by the soft demodulator must be stored.
  • The computational complexity is the same as the previous version of the algorithm.
  • A_k(S_i) ≡ log[α_k(S_i)]
  • Λ_k(S_i, u) = A_{k-1}(S_i) + Γ_k(x(S_i, u)) + B_k(S(S_i, u)) + H_Λ   (52), with the following initializations:
  • B_k(S_i) = max_u [B_{k+1}(S(S_i, u)) + Γ_{k+1}(x(S_i, u))] + H_B   (55)
  • Λ_k(S_i, u) = A_{k-1}(S_i) + Γ_k(x(S_i, u)) + B_k(S(S_i, u)) + H_Λ   (56), with the same initialization as the log-BCJR.
  • Both versions of the SW-BCJR algorithm described can be used, with obvious modifications, to transform the block log-BCJR and the AL-BCJR into their sliding window versions, leading to the SW-log-BCJR and the SWAL1-BCJR and SWAL2-BCJR algorithms. 1.3.4 Explicit Algorithms for Some Particular Cases
  • Λ_k(S_i, u) = A_{k-1}(S_i) + Σ_m c_m(S_i, u)(λ_{km} + ν_{km}) + B_k(S(S_i, u))   (63), where Λ stands for the logarithm of the corresponding quantity.
  • Equation (68): P(y|u) ≈ P(y_0|u) P(y_1|u) P(y_2|u) P(y_3|u).
  • In practice, however, we cannot compute Equation (68) for large n, because the permutations imply that y_2 and y_3 are no longer simple convolutional encodings of u.
  • The MAP algorithm approximates a nonseparable distribution with a separable one; however, it is not clear how good it is compared with the Kullback cross-entropy minimizer.
  • In the iterative decoding, as the reliability of the {u_k} improves, one intuitively expects that the cross-entropy between the input and the output of the MAP algorithm will decrease, so that the approximation will improve.
  • That is, Equation (69) can be obtained, with the extrinsic value L_1k given by Equation (73) as a function f of the received observations and the other extrinsic values, plus L_0k + L_3k, and similarly for the other constituent decoders.
  • The overall decoder is composed of block decoders D_i connected in parallel, as in Figure 30 (when the switches are in position P), which can be implemented as a pipeline or by feedback.
  • A serial implementation is also shown in Figure 30 (when the switches are in position S).
  • We consider y_0 as part of y_1. If the systematic bits are distributed among encoders, we use the same distribution for y_0 among the received observations for the MAP decoders.
  • For turbo codes with only two constituent codes, Equation (77) reduces to
  • Figure 37 shows the implementation of Equation (50) for the forward recursion, using a lookup table for evaluation of log(1 + e^-x); subtraction of max_j A_k(S_j) from A_k(S_j) is used for normalization to prevent buffer overflow.
  • The circuit for maximization can be implemented simply by using a comparator and selector with feedback operation.
  • Figure 38 shows the implementation of Equation (51) for the backward recursion, which is similar to Figure 37.
  • A circuit for computation of log(P_k(u|y)) from Equation (36), using Equation (52) for final computation of bit reliability, is shown in
  • The encoder in Figure 28 may generate an (n(N + M), N) block code, where the M tail bits of encoder 2 and encoder 3 are not transmitted. Since the component encoders are recursive, it is not sufficient to set the last M information bits to zero in order to drive all the encoders to the all-zero state, i.e., to terminate the trellis.
  • The termination (tail) sequence depends on the state of each component encoder after N bits, which makes it impossible to terminate the component encoders with just M bits. This issue has not been resolved in previously proposed turbo code implementations. Fortunately, the simple stratagem illustrated in Figure 33 is sufficient to terminate the trellis at the end of the block (the particular code shown is not important). Here the switch is in position "A" for the first N clock cycles and is in position "B" for M additional cycles, which will flush the encoders with zeros. The decoder does not assume knowledge of the M tail bits.
  • The same termination method may be used for all encoders.
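  • A minimal sketch of the switch-based termination described above for a recursive encoder: for the last M cycles the input is taken from the feedback path, which drives the register to the all-zero state (the (7,5) octal generators and M = 2 are illustrative assumptions).
```python
def rsc_encode_terminated(bits):
    """Four-state RSC (illustrative (7,5) octal generators) with switch-based
    trellis termination: for the last M = 2 steps the input is taken from the
    feedback path so the encoder returns to the all-zero state."""
    s1 = s2 = 0
    sys_out, par_out = [], []

    def step(u):
        nonlocal s1, s2
        a = u ^ s1 ^ s2          # feedback node
        p = a ^ s2               # parity (g1 = 1 + D^2)
        sys_out.append(u)
        par_out.append(p)
        s1, s2 = a, s1

    for u in bits:               # switch in position "A": data bits
        step(u)
    for _ in range(2):           # switch in position "B": tail bits u = s1 ^ s2 force a = 0
        step(s1 ^ s2)
    assert (s1, s2) == (0, 0)    # trellis terminated in the all-zero state
    return sys_out, par_out

print(rsc_encode_terminated([1, 1, 0, 1, 0, 0, 1]))
```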
  • Each t_i is defined as a multiple of 1/3. If any t_i is not an integer, the corresponding encoded output will have a high weight because then the convolutional code output is non-terminating (until the end of the block). If all t_i's are integers, the total encoded weight will be 14 + 2 Σ_{i=1}^{3} t_i. Thus, one of the considerations in designing the interleaver is to avoid integer triplets (t_1, t_2, t_3) that are simultaneously small in all three components. In fact, it would be nice to design an interleaver to guarantee that the smallest value of Σ_{i=1}^{3} t_i (for integer t_i) grows with the block size N.
  • The bad weight-3 data sequences have a small probability of being matched with bad weight-3 permuted data sequences, even in a two-code system.
  • The probability is approximately 1 - (1 - 6/N^2)^q ≈ 6q/N^2. This implies that the minimum distance codeword of the turbo code in Figure 28 is more likely to result from a weight-2 data sequence of the form (...001001000...).
  • Block interleavers are effective if the low-weight sequence is confined to a row.
  • If low-weight sequences (which can be regarded as the combination of lower-weight sequences) are confined to several consecutive rows, then the columns of the interleaver should be sent in a specified order to spread the low-weight sequence as much as possible.
  • The sequence 1001 will still appear at the input of the encoders for any possible column permutation. Only if we permute the rows of the interleaver in addition to its columns is it possible to break such a sequence.
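  • A minimal sketch of a block interleaver that permutes both rows and columns before the column-wise read-out, as suggested above; the permutations and dimensions are illustrative assumptions, not the patent's interleaver.
```python
import random

def permuted_block_interleaver(data, rows, cols, seed=0):
    """Block interleaver that permutes both rows and columns before the
    column-wise read-out (a generic sketch, not the patent's interleaver)."""
    assert len(data) == rows * cols
    rng = random.Random(seed)
    row_perm = list(range(rows)); rng.shuffle(row_perm)
    col_perm = list(range(cols)); rng.shuffle(col_perm)
    matrix = [data[r * cols:(r + 1) * cols] for r in range(rows)]   # write row-wise
    return [matrix[r][c] for c in col_perm for r in row_perm]       # read column-wise

# A "1001" pattern repeated on two consecutive rows is broken up only if the
# rows are permuted as well as the columns.
block = [0] * 32
for i in (0, 3, 8, 11):      # two "1001" patterns in rows 0 and 1 of a 4x8 matrix
    block[i] = 1
print(permuted_block_interleaver(block, rows=4, cols=8))
```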
  • The PMCCC is a rate 1/3 code obtained by concatenating two equal rate 1/2, four-state systematic recursive convolutional codes with a generator matrix as in the first row of Table 2.
  • The SMCCC is a rate 1/3 code. It is formed using as an outer code the same rate 1/2, four-state code as in the PMCCC and, as an inner code, a rate 2/3, four-state systematic recursive convolutional code with a generator matrix as in the third row of Table 2.
  • The interleaver lengths have been chosen so as to yield the same decoding delay, due to the interleaver, in terms of input bits. The results are shown in Figure 44, where we plot the bit-error probability versus the signal-to-noise ratio Eb/No for various input delays.
  • SISO soft-input soft-output
  • FIG. 45 The block diagram of a PMCCC is shown in Figure 45 (a) (the same construction also applies to block codes).
  • a rate 1/3 PMCCC is obtained using two rate 1/2 constituent codes (CCs) and an interleaver.
  • CCs constituent codes
  • interleaver For each input information bit, the codeword sent to the channel is formed by the input bit, followed by the parity check bits generated by the two encoders.
  • Figure 45 (b) the block diagram of the iterative decoder is also shown. It is based on two modules denoted by "SISO,” one for each encoder, an interleaver, and a deinterleaver performing the inverse permutation with respect to the interleaver.
  • the SISO module is a four-port device (quadriport), with two inputs and two outputs.
  • The quadriport accepts as inputs the probability distributions of the information and code symbols labeling the edges of the code trellis, and forms as outputs an update of these distributions based upon the code constraints.
  • From Figure 45 (b) it can be seen that the updated probabilities of the code symbols are never used by the decoding algorithm. 2.2.2 Serially Multiple Concatenated Codes
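  • A minimal sketch of the message-passing schedule of this iterative decoder: two SISO passes per iteration, exchanging extrinsic information through the interleaver and deinterleaver. The SISO internals are abstracted behind a callable, and the toy stand-in used in the demo is an assumption purely to make the schedule runnable.
```python
def turbo_decode(llr_sys, llr_par1, llr_par2, perm, siso, iterations=6):
    """Message-passing schedule of the iterative decoder for two parallel
    concatenated codes (Figure 45 (b)). `siso(sys, par, apriori)` stands for one
    constituent soft-input soft-output decoder and must return extrinsic LLRs; its
    internals (e.g. a BCJR/MAP pass) are abstracted here. `perm[j]` gives the
    original position read out at interleaved position j."""
    n = len(llr_sys)
    ext12 = [0.0] * n          # extrinsic information decoder 1 -> decoder 2
    ext21 = [0.0] * n          # extrinsic information decoder 2 -> decoder 1
    for _ in range(iterations):
        ext12 = siso(llr_sys, llr_par1, ext21)                    # decoder 1
        sys_i = [llr_sys[j] for j in perm]                        # interleave
        apr_i = [ext12[j] for j in perm]
        ext_i = siso(sys_i, llr_par2, apr_i)                      # decoder 2
        ext21 = [0.0] * n
        for j, e in enumerate(ext_i):                             # deinterleave
            ext21[perm[j]] = e
    # final soft decision: channel value plus both extrinsic contributions
    return [s + a + b for s, a, b in zip(llr_sys, ext12, ext21)]

# Runnable demo with a placeholder SISO (NOT a real MAP decoder): it simply mixes
# the systematic, parity and a priori LLRs to exercise the schedule.
toy_siso = lambda sys, par, apr: [0.5 * (s + p) + 0.5 * a for s, p, a in zip(sys, par, apr)]
perm = [2, 0, 3, 1]
print(turbo_decode([1.2, -0.8, 0.5, -1.5], [0.7, -0.2, 0.9, -1.1],
                   [0.3, -0.6, 0.4, -0.9], perm, toy_siso))
```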
  • The block diagram of a SMCCC is shown in Figure 46 (a) (the same construction also applies to block codes).
  • A rate 1/3 SMCCC is obtained using as an outer encoder a rate 1/2 encoder, and as an inner encoder a rate 2/3 encoder.
  • An interleaver permutes the output codewords of the outer code before passing them to the inner code.
  • The block diagram of the iterative decoder is also shown. It is based on two modules denoted by "SISO", one for each encoder, an interleaver, and a deinterleaver.
  • The SISO module is the same as described before. In this case, though, both updated probabilities of the input and code symbols are used in the decoding procedure. 2.2.3 Soft-Output Algorithms
  • The SISO module is based on MAP algorithms.
  • These algorithms perform both forward and backward recursions and, thus, require that the whole sequence be received before starting the decoding operations. As a consequence, they can only be used in block-mode decoding.
  • The memory requirement and computational complexity grow linearly with the sequence length.
  • Some algorithms require only a forward recursion, so that they can be used in continuous-mode decoding. However, their memory and computational complexity grow exponentially with the decoding delay.
  • Various forms of suboptimum soft-output algorithms can be used. Two approaches have been taken. The first approach tries to modify
  • The dynamics of a time-invariant convolutional code are completely specified by a single trellis section, which describes the transitions (edges) between the states of the trellis at time instants k and k + 1.
  • A trellis section is characterized by the following:
  • The SISO module is a four-port device that accepts at the input the sequences of probability distributions P(c; I) and P(u; I) and outputs the sequences of probability distributions P(c; O) and P(u; O) based on its inputs and on its knowledge of the trellis section (or code in general).
  • The algorithm by which the SISO operates in evaluating the output distributions will be explained in two steps. In the first step, we consider the following algorithm:
  • The new probability distributions P_k(u; O) and P_k(c; O) represent a smoothed version of the input distributions P_k.
  • The bit extrinsic information is derived from the symbol extrinsic information using Equations (84) and (85).
  • Consider a rate k_0/n_0 trellis encoder such that each input symbol u comprises k_0 bits and each output symbol c comprises n_0 binary bits.
  • Equation (86) is not used for those encoders in a concatenated coded system connected to a channel.
  • P_k[c(e); I] is not represented as a product.
  • SW-SISO Sliding- Window Soft-Input Soft-Output Module
  • SW-SISO: sliding-window soft-input soft-output.
  • SW-SISO1: First Version of the Sliding-Window SISO Algorithm.
  • SW-SISO2: Second Simplified Version of the Sliding-Window SISO Algorithm.
  • Equations (84) and (85) and Equations (80) and (81) become the following:
  • The quantities h_c and h_u are normalization constants needed to prevent excessive growth of the numerical values of the α's and β's.
  • To evaluate Equation (101), we can use two approximations, with increasing accuracy (and complexity).
  • Equation (101) can be written as max_i(a_i) + δ(a_1, a_2, ..., a_n).
  • The second term, δ(a_1, a_2, ..., a_n), is called the correction term and can be computed using a look-up table, as discussed above.
  • max can be replaced by max* in Equations (103) through (106).
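  • A minimal sketch of the max* operation and the three ways of handling its correction term mentioned above (exact, look-up table, and the linear and threshold approximations); the table size, slope, offset and threshold values are illustrative assumptions.
```python
import math

def max_star_exact(a, b):
    """Exact pairwise max* = log(e^a + e^b) = max(a, b) + log(1 + e^-|a-b|)."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

# Lookup-table version: the correction term log(1 + e^-x) is read from a small
# table indexed by quantized |a - b| (table size and step are illustrative).
_STEP = 0.25
_TABLE = [math.log1p(math.exp(-_STEP * i)) for i in range(32)]

def max_star_lut(a, b):
    idx = min(int(abs(a - b) / _STEP), len(_TABLE) - 1)
    return max(a, b) + _TABLE[idx]

def max_star_linear(a, b, slope=-0.25, offset=0.7):
    """Linear approximation of the correction term, clipped at zero
    (slope/offset values are illustrative)."""
    return max(a, b) + max(0.0, offset + slope * abs(a - b))

def max_star_threshold(a, b, thresh=1.5, value=0.35):
    """Threshold approximation: a constant correction below a threshold, none
    above it (threshold/value are illustrative)."""
    return max(a, b) + (value if abs(a - b) < thresh else 0.0)

for f in (max_star_exact, max_star_lut, max_star_linear, max_star_threshold):
    print(f.__name__, round(f(1.0, 0.4), 4))
```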
  • The overall PMCCC forms a very powerful code for possible use in applications requiring reliable operation at very low signal-to-noise ratios.
  • The performance of the continuous iterative decoding algorithm, applied to the concatenated code, is obtained by simulation, using the ASW-SISO and the look-up table algorithms. It is shown in Figure 50, where we plot the bit-error probability as a function of the number of iterations of the decoding algorithm for various values of the bit signal-to-noise ratio, Eb/No. It can be seen that the decoding algorithm converges down to an error probability of 10^-5 for signal-to-noise ratios of 0.2 dB with nine iterations. Moreover, convergence is guaranteed also at signal-to-noise ratios as low as 0.05 dB, which is
  • G(D) = [ 1 + D + D^3   1 + D ], and, as an inner code, the rate 1/2, 8-state recursive encoder with generating matrix
  • The resulting SMCCC has rate 1/4.
  • The interleaver length has been chosen to ensure a decoding delay, in terms of input information bits, equal to 16,384.
  • Figures 53, 54, 55 and 56 are models for facilitating accurate and concise DMT signal waveform descriptions.
  • A DMT sub-carrier is defined in the frequency domain.
  • x_n is the n-th IDFT output sample (defined in the time domain).
  • The DAC and analog processing block construct the continuous transmit voltage waveform corresponding to the discrete digital input samples. More precise specifications for these analog blocks arise indirectly from the analog transmit signal linearity and power spectral density specifications.
  • The use of Figures 53, 54, 55 and 56 as a transmitter reference model allows all initialization signal waveforms to be described through the sequence of DMT symbols, {Z_i}, required to produce that signal. Allowable differences in the characteristics of different digital-to-analog and analog processing blocks will produce somewhat different continuous-time voltage waveforms for the same initialization signal. 3.1
  • ATU-C transmitter reference models: ATM and STM are application options; the ATU-C and ATU-R may be configured for either STM bit sync transport or ATM cell transport. 3.1.1 ATU-C transmitter reference model for STM transport
  • Figure 53 is a block diagram of an ADSL Transceiver Unit-Central office (ATU-C) transmitter showing the functional blocks and interfaces for the downstream transport of STM data.
  • ATU-C: ADSL Transceiver Unit-Central office.
  • The basic STM transport mode is bit serial.
  • The framing mode used determines whether byte boundaries, if present at the V-C interface, shall be preserved. Outside the ASx/LSx serial interfaces, data bytes are transmitted MSB first. All serial processing in the ADSL frame (e.g., CRC, scrambling, etc.) shall, however, be performed LSB first, with the outside world MSB considered by the ADSL as LSB. As a result, the first incoming bit (outside world MSB) shall be the first processed bit inside the ADSL (ADSL LSB).
  • ADSL equipment shall support at least bearer channels AS0 and LS0 downstream. Support of other bearer channels is optional. Two paths are shown between the Mux/Sync control and Tone ordering; the "fast" path provides low latency, and the interleaved path provides very low error rate and greater latency.
  • An ADSL system supporting STM shall be capable of operating in a dual latency mode for the downstream direction, in which user data is allocated to both paths (i.e. fast and interleaved).
  • Figure 54 is a block diagram of an ADSL Transceiver Unit-Central office (ATU-C) transmitter showing the functional blocks and interfaces that are referenced in the ITU-T G.992.1 Recommendation for the downstream transport of ATM data. Byte boundaries at the V-C interface shall be preserved in the ADSL data frame.
  • ATU-C: ADSL Transceiver Unit-Central office.
  • All serial processing in the ADSL frame (e.g., CRC, scrambling, etc.) shall, however, be performed LSB first, with the outside world MSB considered by the ADSL as LSB.
  • As a result, the first incoming bit (outside world MSB) will be the first processed bit inside the ADSL (ADSL LSB), and the CLP bit of the ATM cell header will be carried in the MSB of the ADSL frame byte (i.e., processed last).
  • ADSL equipment shall support at least bearer channel AS0 downstream.
  • Two paths are shown between the Mux/Sync control and Tone ordering; the "fast" path provides low latency, and the interleaved path provides very low error rate and greater latency.
  • An ADSL system supporting ATM transport shall be capable of operating in a single latency mode, in which all user data is allocated to one path (i.e. fast or interleaved).
  • ATM and STM are application options; the ATU-C and ATU-R may be configured for either STM bit sync transport or ATM cell transport. 3.2.1 ATU-R transmitter reference model for STM transport
  • Figure 55 shows a block diagram of an ATU-R transmitter showing the functional blocks and interfaces that are referenced in this Recommendation for the upstream transport of STM data.
  • The basic STM transport mode is bit serial.
  • The framing mode used determines whether byte boundaries, if present at the V-C interface, shall be preserved. Outside the LSx serial interfaces, data bytes are transmitted MSB first. All serial processing in the ADSL frame (e.g., CRC, scrambling, etc.) shall, however, be performed LSB first, with the outside world MSB considered by the ADSL as LSB. As a result, the first incoming bit (outside world MSB) will be the first processed bit inside the ADSL (ADSL LSB). ADSL equipment shall support at least bearer channel LS0 upstream. Two paths are shown between the Mux/Sync control and Tone ordering; the "fast" path provides low latency, and the interleaved path provides very low error rate and greater latency. An ADSL system supporting STM shall be capable of operating in a dual latency mode for the downstream direction, in which user data is allocated to both paths (i.e. fast and interleaved).
  • Figure 56 shows a block diagram of an ATU-R transmitter showing the functional blocks and interfaces that are referenced in this Recommendation for the upstream transport of ATM data.
  • Byte boundaries at the T-R interface shall be preserved in the ADSL data frame. Outside the LSx serial interfaces, data bytes are transmitted MSB first in accordance with ITU-T Recommendations I.361 and I.432. All serial processing in the ADSL frame (e.g., CRC, scrambling, etc.) shall, however, be performed LSB first, with the outside world MSB considered by the ADSL as LSB. As a result, the first incoming bit (outside world MSB) will be the first processed bit inside the ADSL (ADSL LSB), and the CLP bit of the ATM cell header will be carried in the MSB of the ADSL frame byte (i.e., processed last). ADSL equipment shall support at least bearer channel LS0 upstream. Two paths are shown between the Mux/Sync control
  • An ADSL system may transport up to seven user data streams on seven bearer channels simultaneously: up to four independent downstream simplex bearers (unidirectional from the network operator (i.e. V-C interface) to the CI (i.e. T-R interface)).
  • An ADSL system may transport up to three duplex bearers (bi-directional between the network operator and the CI).
  • The three duplex bearers may alternatively be configured as independent unidirectional simplex bearers, and the rates of the bearers in the two directions (network operator toward CI and vice versa) do not need to match.
  • All bearer channel data rates shall be programmable in any combination of integer multiples of 32 kbit/s.
  • The ADSL data multiplexing format is flexible enough to allow other transport data rates, such as channelizations based on existing 1.544 Mbit/s, but the support of these data rates (non-integer multiples of 32 kbit/s) will be limited by the ADSL system's available capacity for synchronization.
  • The maximum net data rate transport capacity of an ADSL system will depend on the characteristics of the loop on which the system is deployed, and on certain configurable options that affect overhead.
  • The ADSL bearer channel rates shall be configured during the initialization and training procedure.
  • The transport capacity of an ADSL system per se is defined only as that of the bearer channels. When, however, an ADSL system is installed on a line that also carries POTS or ISDN signals, the overall capacity is that of POTS or ISDN plus ADSL.
  • An ATU-x shall be configured to support STM transmission or ATM transmission. Bearer channels configured to transport STM data can also be configured to carry ATM data. ADSL equipment may be capable of simultaneously supporting both ATM and STM transport.
  • In addition, an ADSL system may transport a Network Timing Reference (NTR). 3.3.1 Transport of STM data
  • ADSL systems transporting STM shall support the simplex bearer channel AS0 and the duplex bearer channel LS0 downstream. Bearer channels AS0, LS0, and any other bearer channels supported shall be independently allocable to a particular latency path as selected by the ATU-C at start-up.
  • The system shall support dual-latency downstream.
  • ADSL systems transporting STM shall support the duplex bearer channel LS0 upstream using a single latency path. Bearer channel
  • AS0 shall support the transport of data at all integer multiples of 32 kbit/s from 32 kbit/s to 6144 kbit/s.
  • Bearer channel LS0 shall support 16 kbit/s and all integer multiples of 32 kbit/s from 32 kbit/s to 640 kbit/s.
  • They shall support the range of integer multiples of 32 kbit/s shown in Table 4. Support for data rates based on non-integer multiples of 32 kbit/s is also optional. Table 4 shows the required 32 kbit/s integer multiples for transport of STM. Table 4 - Required 32 kbit/s integer multiples for transport of STM
  • Table 5 illustrates the data rate terminology and definitions used for STM transport. Table 5 - Data Rate Terminology for STM transport
  • STM data rate ("net data rate") = Σ(B_I, B_F) × 32 kbit/s over the ASx and LSx bearer channels (NOTE)
  • Total data rate = net data rate plus overhead rate; line rate = Σ b_i × 4 kbit/s
  • An ADSL system transporting ATM shall support the single latency mode at all integer multiples of 32 kbit/s up to
  • ATM data shall be mapped to bearer channel AS0 in the downstream direction and to bearer channel LS0 in the upstream direction.
  • AS0 in the downstream direction.
  • LS0 in the upstream direction.
  • One of three different "latency classes" may be used: single latency, not necessarily the same for each direction of transmission; dual latency downstream, single latency upstream; dual latency both upstream and downstream.
  • ADSL systems transporting ATM shall support bearer channel AS0 downstream and bearer channel LS0 upstream, with each of these bearer channels independently allocable to a particular latency path as selected by the ATU-C at start-up. Therefore, support of dual latency is optional for both downstream and upstream.
  • Only bearer channel AS0 shall be used, and it shall be allocated to the appropriate latency path.
  • Downstream ATM data are transmitted through both latency paths (i.e., 'fast' and 'interleaved').
  • Only bearer channels AS0 and AS1 shall be used, and they shall be allocated to different latency paths.
  • Upstream ATM data are transmitted through a single latency path (i.e., 'fast' only or 'interleaved' only).
  • Only bearer channel LS0 shall be used, and it shall be allocated to the appropriate latency path.
  • The choice of the fast or interleaved path may be made independently of the choice for the downstream data.
  • Upstream ATM data are transmitted through both latency paths (i.e., 'fast' and 'interleaved').
  • Bearer channel AS0 shall support the transport of data at all integer multiples of 32 kbit/s from 32 kbit/s to 6144 kbit/s.
  • Bearer channel LS0 shall support all integer multiples of 32 kbit/s from 32 kbit/s to 640 kbit/s. Support for data rates based on non-integer multiples of 32 kbit/s is also optional.
  • They shall support the range of integer multiples of 32 kbit/s shown in Table 4. Data rates based on non-integer multiples of 32 kbit/s are optional.
  • Bearer channels AS2, AS3 and LS2 shall not be provided for an ATM based ATU-x.
  • Table 6 illustrates the data rate terminology and definitions used for ATM transport. Table 6 - Data Rate Terminology for ATM transport
  • Line rate = Σ b_i × 4 kbit/s (total data rate plus Trellis Coding overhead).
  • The total bit rate transmitted by the ADSL system when operating in an optional reduced-overhead framing mode shall include capacity for the data rate transmitted in the ADSL bearer channels and ADSL system overhead (which includes an ADSL embedded operations channel, EOC; an ADSL overhead control channel, AOC; CRC check bytes; fixed indicator bits for OAM; and FEC redundancy bytes).
  • ADSL system overhead includes an ADSL embedded operations channel (EOC), an ADSL overhead control channel (AOC), CRC check bytes, fixed indicator bits for OAM, and FEC redundancy bytes.
  • The total bit rate shall also include capacity for the synchronization control bytes and capacity for bearer channel synchronization control.
  • An ATU-C may support STM transmission or ATM transmission or both. The framing modes that shall be supported depend upon the ATU-C being configured for either STM or ATM transport. If framing mode k is supported, then modes k-1, ..., 0 shall also be supported.
  • The ATU-C and ATU-R shall indicate a framing mode number 0, 1, 2 or 3 which they intend to use. The lowest indicated framing mode shall be used.
  • Using framing mode 0 ensures that an STM-based ATU-x with an external ATM TC will interoperate with an ATM-based ATU-x. Additional modes of interoperation are possible depending upon optional features provided in either ATU-x.
  • NTR: Network Timing Reference.
  • LS0: Three input and output data interfaces are defined at the ATU-C for the duplex channels supported by the ADSL system: LS0, LS1, and LS2 (LSx in general). LS0 is also known as the "C" or control channel. It carries the signaling associated with the ASx bearer channels and it may also carry some or all of the signaling associated with the other duplex bearer channels. 3.4.1.4 Payload transfer delay
  • The one-way transfer delay for payload bits in all bearers (simplex and duplex) from the V reference point at the central office end (V-C) to the T reference point at the remote end (T-R) for channels assigned to the fast buffer shall be no more than 2 ms.
  • An ATU-C configured for STM transport shall support the full overhead framing structure 0.
  • The support of full overhead framing structure 1 and the reduced overhead framing structures 2 and 3 is optional. Preservation of V-C interface byte boundaries (if present) at the U-C interface may be supported for any of the U-C interface framing structures.
  • An ATU-C configured for STM transport may support insertion of a Network Timing Reference (NTR). 3.4.2 ATM Transmission Protocol Specific functionalities
  • The functional data interfaces at the ATU-C for ATM are shown in Figure 58.
  • The ATM channel ATM0 shall always be provided; the channel ATM1 may be provided for support of dual latency mode.
  • Each channel operates as an interface to a physical layer pipe.
  • No fixed allocation between the ATM channels 0 and 1 on one hand and transport of 'fast' and 'interleaved' data on the other hand is assumed. This relationship is configured inside the ATU-C.
  • Flow control functionality shall be available on the V reference point to allow the ATU-C (i.e. the physical layer) to control the cell flow to and from the ATM layer.
  • This functionality is represented by Tx_Cell_Handshake and Rx_Cell_Handshake. A cell may be transferred from the ATM to the PHY layer only after the ATU-C has activated the Tx_Cell_Handshake. Similarly, a cell may be transferred from the PHY layer to the ATM layer only after the Rx_Cell_Handshake. This functionality is important to avoid cell overflow or underflow in the ATU-C and ATM layers.
  • The one-way transfer delay (excluding cell specific functionalities) for payload bits in all bearers (simplex and duplex) from the V reference point at the central office end (V-C) to the T reference point at the remote end (T-R) for channels assigned to the fast buffer shall be no more than 2 ms.
  • Idle cells shall be inserted in the transmit direction for cell rate de-coupling. Idle cells are identified by the standardized pattern for the cell header given in ITU-T Recommendation I.432. 3.4.2.3.2 Header Error Control (HEC) Generation.
  • HEC: Header Error Control.
  • The HEC byte shall be generated in the transmit direction as described in ITU-T Recommendation I.432, including the recommended modulo 2 addition (XOR) of the pattern 01010101 to the HEC bits.
  • The generator polynomial coefficient set used and the HEC sequence generation procedure shall be in accordance with ITU-T Recommendation I.432.
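  • A minimal sketch of the HEC generation described above: a CRC-8 with generator x^8 + x^2 + x + 1 over the first four header bytes, followed by the XOR with the pattern 01010101, as specified in ITU-T Recommendation I.432.
```python
def atm_hec(header4):
    """HEC byte for a 4-byte ATM cell header: CRC-8 with generator
    x^8 + x^2 + x + 1 over the header, then XOR with the coset pattern 01010101,
    as described in ITU-T Recommendation I.432."""
    crc = 0
    for byte in header4:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ 0x07) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc ^ 0x55

# Example: idle-cell header (VPI=VCI=0, PT=0, CLP=1) -> bytes 00 00 00 01,
# for which the HEC is 0x52, matching the standardized idle-cell pattern.
print(hex(atm_hec(bytes([0x00, 0x00, 0x00, 0x01]))))
```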
  • Bit timing and ordering: when interfacing ATM data bytes to the AS0 or AS1 bearer channel, the most significant bit (MSB) shall be sent first.
  • The AS0 or AS1 bearer channel data rates shall be integer multiples of 32 kbit/s, with bit timing synchronous with the ADSL downstream timing base.
  • the cell delineation function permits the identification of cell boundaries in the payload. It uses the HEC field in the cell header. Cell delineation shall be performed using a coding law checking the HEC field in the cell header according to the algorithm described in ITU-T Recommendation 1.432.
  • the ATM cell delineation state machine is shown in Figure 59.
  • The delineation process is performed by checking bit by bit for the correct HEC. Once such an agreement is found, it is assumed that one header has been found, and the method enters the PRESYNC state.
  • The cell delineation process may be performed byte by byte.
  • The delineation process is then performed by checking cell by cell for the correct HEC. The process repeats until the correct HEC has been confirmed DELTA times consecutively. If an incorrect HEC is found, the process returns to the HUNT state.
  • The HEC covers the entire cell header.
  • The code used for this function is capable of either single bit error correction or multiple bit error detection.
  • Error detection shall be implemented as defined in ITU-T Recommendation I.432, with the exception that any HEC error shall be considered as a multiple bit error, and therefore HEC error correction shall not be performed.
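  • A minimal sketch of the HUNT/PRESYNC/SYNC cell delineation state machine described above; the ALPHA = 7 and DELTA = 6 values are the usual I.432 defaults and are assumptions here, since Figure 59 is not reproduced.
```python
HUNT, PRESYNC, SYNC = "HUNT", "PRESYNC", "SYNC"

def delineation_step(state, counter, hec_correct, alpha=7, delta=6):
    """One transition of the HUNT/PRESYNC/SYNC cell delineation state machine.
    `hec_correct` is the outcome of the HEC check on the current candidate
    header/cell. ALPHA and DELTA defaults are illustrative assumptions."""
    if state == HUNT:
        # bit-by-bit (or byte-by-byte) search: a single correct HEC -> PRESYNC
        return (PRESYNC, 1) if hec_correct else (HUNT, 0)
    if state == PRESYNC:
        # cell-by-cell check: DELTA consecutive correct HECs -> SYNC
        if not hec_correct:
            return HUNT, 0
        return (SYNC, 0) if counter + 1 >= delta else (PRESYNC, counter + 1)
    # SYNC: ALPHA consecutive incorrect HECs -> back to HUNT
    if hec_correct:
        return SYNC, 0
    return (HUNT, 0) if counter + 1 >= alpha else (SYNC, counter + 1)

# Walk through a correct-HEC stream: HUNT -> PRESYNC -> ... -> SYNC.
state, counter = HUNT, 0
for _ in range(8):
    state, counter = delineation_step(state, counter, hec_correct=True)
print(state)
```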
  • An ATU-C configured for ATM transport shall support the full overhead framing structures 0 and 1.
  • the support of reduced overhead framing structures 2 and 3 is optional.
  • the ATU-C transmitter shall preserve V-C interface byte boundaries (explicitly present or implied by ATM cell boundaries) at the U-C interface, independent of the U-C interface framing structure.
  • An STM ATU-R transporting ATM cells and not preserving T-R byte boundaries at the U-R interface shall indicate during initialization that frame structure 0 is the highest frame structure supported.
  • An STM ATU-R transporting ATM cells and preserving T-R byte boundaries at the U-R interface shall indicate during initialization that frame structure 0, 1, 2 or 3 is the highest frame structure supported.
  • An ATM ATU-C receiver operating in framing structure 0 cannot assume that the ATU-R transmitter will preserve T-R interface byte boundaries at the U-R interface and shall therefore perform the cell delineation bit-by-bit.
  • An ATU-C configured for ATM transport may support insertion of a Network Timing Reference (NTR). 3.4.3 Network timing reference (NTR) 3.4.3.1 Need for NTR
  • VTOA: Voice and Telephony Over ATM.
  • DVC: Desktop Video Conferencing.
  • The ADSL system may transport an 8 kHz timing marker as NTR. This 8 kHz timing marker may be used for voice/video playback at the decoder (D/A converter) in DVC and VTOA applications.
  • The 8 kHz timing marker is input to the ATU-C as part of the interface at the V-C reference point. 3.4.3.2 Transport of the NTR
  • The intention of the NTR transport mechanism is that the ATU-C provides timing information at the U-C reference point to enable the ATU-R to deliver to the T-R reference point timing information that has a timing accuracy corresponding to the accuracy of the clock provided to the V-C reference point.
  • The NTR shall be inserted in the U-C framing structure as follows: a) the ATU-C may generate an 8 kHz local timing reference (LTR) by dividing its sampling clock by the appropriate integer (276 if 2.208 MHz is used); b) it shall transmit the change in phase offset between the input NTR and LTR (measured in cycles of the 2.208 MHz clock, that is, units of approximately 452 ns) from the previous superframe to the present one. This shall be encoded into four bits ntr3 - ntr0 (with ntr3 the MSB), representing a signed integer in the
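  • A minimal sketch of the 4-bit NTR phase-offset encoding described in item b) above; the clamping range is an assumption, since the sentence giving the range is truncated in this extract.
```python
def encode_ntr_bits(delta_cycles):
    """Encode the change in NTR-LTR phase offset, measured in cycles of the
    2.208 MHz clock (~452 ns), into the four bits ntr3..ntr0 as a signed integer.
    Clamping to the 4-bit two's-complement range [-8, +7] is an assumption, since
    the range sentence is truncated in the text above."""
    value = max(-8, min(7, int(round(delta_cycles))))
    twos = value & 0xF                                 # 4-bit two's complement
    return [(twos >> b) & 1 for b in (3, 2, 1, 0)]     # [ntr3, ntr2, ntr1, ntr0]

print(encode_ntr_bits(-3))   # -> [1, 1, 0, 1]
```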
  • This subclause specifies framing of the downstream signal (ATU-C transmitter)
  • Two types of framing are defined: full overhead and reduced overhead.
  • Two versions of full overhead and two versions of reduced overhead are defined.
  • The four resulting framing modes are defined in Table 8, and shall be referred to as framing modes 0, 1, 2 and 3. Table 8 - Definition of framing modes
  • the ATU-C shall indicate during initialization the highest framing structure number it supports. If the ATU-C indicates it supports framing structure A, it shall also support all framing structures A-l to 0. If the ATU-R indicates a lower framing structure number during initialization, the ATU-C shall fall back to the framing structure number indicated by the ATU-R.
  • Outside the ASx/LSx serial interfaces, data bytes are transmitted MSB first in accordance with ITU-T Recommendations G.703, G.709, I.361, and I.432.
  • All serial processing in the ADSL frame (e.g., CRC, scrambling, etc.) shall, however, be performed LSB first, with the outside world MSB considered by the ADSL as LSB.
  • the first incoming bit (outside world MSB) will be the first processed bit inside the ADSL (ADSL LSB).
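  • A minimal sketch of the bit-order rule above, read as a per-byte bit reversal between the interface order (MSB first) and the order in which the ADSL serial processing consumes the bits; this reading is an illustrative assumption.
```python
def to_adsl_bit_order(byte):
    """Map an interface byte (transmitted MSB first) to the bit order in which the
    ADSL serial processing (CRC, scrambling, ...) consumes it: the outside-world
    MSB becomes the first processed bit, i.e. the ADSL LSB. This amounts to a
    per-byte bit reversal."""
    out = 0
    for i in range(8):
        out = (out << 1) | ((byte >> i) & 1)
    return out

# 0xB4 = 1011 0100 transmitted MSB first arrives as bits 1,0,1,1,0,1,0,0;
# processed LSB first inside the ADSL this is the byte 0010 1101 = 0x2D.
print(hex(to_adsl_bit_order(0xB4)))
```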
  • Figures 53 and 54 show functional block diagrams of the ATU-C transmitter with reference points for data framing.
  • Up to four downstream simplex data channels and up to three duplex data channels shall be synchronized to the 4 kHz ADSL DMT frame rate, and multiplexed into two separate data buffers (fast and interleaved).
  • A cyclic redundancy check (CRC), scrambling, and forward error correction (FEC) coding shall be applied to the contents of each buffer separately, and the data from the interleaved buffer shall then be passed through an interleaving function.
  • the two data streams shall then be tone ordered, and combined into a data symbol that is input to the constellation encoder. After constellation encoding the data shall be modulated to produce an analog signal for transmission across the customer loop.
  • a bit-level framing pattern shall not be inserted into the data symbols of the frame or superframe structure.
  • DMT frame (i.e. symbol) boundaries are delineated by the cyclic prefix inserted by the modulator.
  • Superframe boundaries are determined by the synchronization symbol, which is also inserted by the modulator and carries no user data.
  • the data frames, i.e. bit-level data prior to constellation encoding
  • the reference points for which data framing will be described in the following subclauses are: a) A (Mux data frame): the multiplexed, synchronized data after the CRC has been inserted; b) B (FEC output data frame): the data frame generated at the output of the FEC encoder at the DMT symbol rate, where an FEC block may span more than one DMT symbol period; c) C (constellation encoder input data frame): the data frame presented to the constellation coder.
  • A: Mux data frame
  • B: FEC output data frame
  • C: Constellation encoder input data frame
  • Each superframe is composed of 68 data frames, numbered from 0 to 67, which are encoded and modulated into DMT symbols, followed by a synchronization symbol, which carries no user or overhead bit-level data and is inserted by the modulator to establish superframe boundaries.
  • Each data frame within the superframe contains data from the fast buffer and the interleaved buffer. During each ADSL superframe, eight bits shall be reserved for the CRC on the fast data buffer (crc0-crc7), and 24 indicator bits (ib0-ib23) shall be assigned for OAM functions
  • the synchronization byte of the fast data buffer ("fast byte") carries the CRC check bits in frame 0 and the indicator bits in frames 1, 34, and 35
  • the fast byte in other frames is assigned in even-/odd-frame pairs to either the EOC or to synchronization control of the bearer channels assigned to the fast buffer
  • Bit 0 of the fast byte in an even-numbered frame (other than frames 0 and 34) and bit 0 of the fast byte of the odd-numbered frame immediately following shall be set to "0" to indicate that these frames carry synchronization control information
  • CRC cyclic redundancy check
  • indicator bits: the fast bytes of two successive ADSL frames, beginning with an even-numbered frame, may contain indications of "no synchronization action" or, alternatively, they may be used to transmit one EOC message, consisting of 13 bits
  • the indicator bits are defined in Table 9. Bit 0 of the fast byte in an even-numbered frame (other than frames 0 and 34) and bit 0 of the fast byte of the odd-numbered frame immediately following shall be set to "1" to indicate these frames carry a 13-bit EOC message plus one additional bit, r1
  • the r1 bit is reserved for future use and shall be set to 1
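The fast-byte assignment described in the preceding items can be summarized by a small lookup. This is only a sketch: the even/odd-pair bookkeeping that chooses between synchronization control and EOC is simplified to the bit-0 convention stated above.

    def fast_byte_role(frame_number):
        # frame_number: 0..67 within the ADSL superframe (full overhead framing)
        if frame_number == 0:
            return "CRC bits crc0-crc7"
        if frame_number in (1, 34, 35):
            return "indicator bits"
        # remaining frames are used in even/odd pairs: bit 0 of the fast byte
        # set to 0 signals synchronization control, set to 1 signals a 13-bit
        # EOC message plus the r1 bit
        return "sync control or EOC (selected by bit 0 of the even/odd pair)"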
  • the synchronization byte of the interleaved data buffer (sync byte) carries the CRC check bits for the previous superframe in frame 0
  • the sync byte shall be used for synchronization control of the bearer channels assigned to the interleaved data buffer or used to carry an ADSL overhead control (AOC) channel
  • AOC ADSL overhead control
  • Each data frame shall be encoded into a DMT symbol. As is shown in Figure 61, each frame is composed of a fast data buffer and an interleaved data buffer, and the frame structure has a different appearance at each of the reference points (A, B, and C)
  • the bytes of the fast data buffer shall be clocked into the constellation encoder first, followed by the bytes of the interleaved data buffer. Bytes are clocked least significant bit first
  • Each bearer channel shall be assigned to either the fast or the interleaved buffer during initialization, and a pair of bytes, (BF, BI), transmitted for each bearer channel, where BF and BI designate the number of bytes allocated to the fast and interleaved buffers, respectively
  • the frame structure of the fast data buffer shall be as shown in Figure 64, for reference points A and B, which are defined in Figures 53 and 54
  • the fast buffer shall always contain at least the fast byte. This is followed by BF(AS0) bytes of channel AS0, then BF(AS1) bytes of channel AS1, BF(AS2) bytes of channel AS2 and BF(AS3) bytes of channel AS3. Next come the bytes for any duplex (LSx) channels allocated to the fast buffer. If any
  • BF(ASx) is non-zero
  • both an AEX and a LEX byte follow the bytes of the last LSx channel, and if any BF(LSx) is non-zero, the LEX byte shall be included
  • BF(LS0) = 255
  • no bytes are included for the LS0 channel
  • the 16 kbit/s C channel shall be transported in every other LEX byte on average, using the sync byte to denote when to add the LEX byte to the LS0 bearer channel
  • RF FEC redundancy bytes shall be added to the mux data frame (reference point A) to produce the FEC output data frame (reference point B), where RF is given in the options used during initialization
  • the constellation encoder input data frame (reference point C) is identical to the FEC output data frame (reference point B) 3.4.4.1.2.2 Interleaved data buffer (with full overhead)
  • the interleaved data buffer shall always contain at least the sync byte
  • the rest of the buffer shall be built in the same manner as the fast buffer, substituting BI in place of BF
  • the length of each mux data frame is KI bytes, as defined in Figure 65
  • the FEC output data frame (reference point B) shall partially overlap two mux data frames for all except the last frame, which shall contain the RI FEC redundancy bytes
  • the FEC output data frames are interleaved to a specified interleave depth
  • the interleaving process delays each byte of a given FEC output data frame a different amount, so that the constellation encoder input data frames will contain bytes from many different FEC data frames
  • mux data frame 0 of the interleaved data buffer is aligned with the ADSL superframe and mux data frame 0 of the fast data buffer (this is not true at reference point C)
  • the interleaved data buffer will be delayed by (S × interleave depth × 250) µs with respect to the fast data buffer, and data frame 0 (containing the CRC bits for the interleaved data buffer)
  • D is the delay operator; that is, the CRC is the remainder when M(D) · D^8 is divided by G(D).
  • the CRC check bits are transported in the synchronization bytes (fast and interleaved, 8 bits each) of frame 0 for each data buffer
  • the bits (i.e. message polynomials) covered by the CRC include
  • Each byte shall be clocked into the CRC least significant bit first
  • the number of bits over which the CRC is computed varies with the allocation of bytes to the fast and interleaved data buffers (the numbers of bytes in ASx and LSx vary according to the (BF, BI) pairs; AEX is present in a given buffer only if at least one ASx is allocated to that buffer; LEX is present in a given buffer only if at least one ASx or one LSx is allocated to that buffer)
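A bit-serial sketch of the CRC described above (crc = M(D)·D^8 mod G(D), with bytes clocked in least-significant bit first). The generator polynomial used here, G(D) = D^8 + D^4 + D^3 + D^2 + 1, is an assumption; substitute the G(D) actually specified if it differs.

    GEN_LOW = 0x1D   # low eight bits of the assumed G(D) = D^8 + D^4 + D^3 + D^2 + 1

    def superframe_crc(data: bytes) -> int:
        reg = 0
        for byte in data:
            for i in range(8):                               # LSB of each byte first
                feedback = ((reg >> 7) & 1) ^ ((byte >> i) & 1)
                reg = (reg << 1) & 0xFF
                if feedback:
                    reg ^= GEN_LOW                           # reduce modulo G(D)
        return reg                                           # remainder of M(D)*D^8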
  • CRC field lengths over an ADSL superframe will vary from approximately 67 bytes to approximately 14,875 bytes 3.4.4.2 Synchronization
  • the input data streams shall be synchronized to the ADSL timing base using the synchronization control mechanism (consisting of the synchronization control byte and the AEX and LEX bytes). Forward-error-correction coding shall always be applied to the synchronization control byte(s)
  • the synchronization control mechanism is not needed, and the synchronization control byte shall always indicate "no synchronization action" (see Table 10 and Table 11) 3.4.4.2.1 Synchronization for the fast data buffer
  • Synchronization control for the fast data buffer may occur in frames 2 through 33, and 36 through 67 of an ADSL superframe, where the fast byte may be used as the synchronization control byte. No synchronization action shall be taken for those frames for which the fast byte is used for CRC, fixed indicator bits, or EOC
  • ADSL deployments may need to inter-work with DS1 (1.544 Mbit/s) or DS1C (3.152 Mbit/s) rates
  • the synchronization control option that allows adding up to two bytes to an ASx bearer channel provides sufficient overhead capacity to transport combinations of DS1 or DS1C channels transparently (without interpreting or stripping and regenerating the framing embedded within the DS1 or DS1C)
  • the synchronization control algorithm shall, however, guarantee that the fast byte in some minimum number of frames is available to carry EOC frames, so that a minimum EOC rate (4 kbit/s) may be maintained
  • the LS0 bearer channel is transported in the LEX byte, using the "add LEX byte to designated LSx channel", with LS0 as the designated channel, every other frame on average
  • the synchronization control byte shall indicate "no synchronization action" (i.e., sc7-0 coded "XX0011X0" in binary, with X discretionary) 3.4.4.2.2 Synchronization for the interleaved data buffer
  • Synchronization control for the interleaved data buffer can occur in frames 1 through 67 of an ADSL superframe, where the sync byte may be used as the synchronization control byte. No synchronization action shall be taken during frame 0, where the sync byte is used for CRC, or during frames when the LEX byte carries the AOC
  • the format of the sync byte when used as synchronization control for the interleaved data buffer shall be as given in Table 11. In the case where no signals are allocated to the interleaved data buffer, the sync byte shall carry the AOC data directly, as shown in Figure 63
  • ADSL deployments may need to inter-work with DS1 (1.544 Mbit/s) or DS1C (3.152 Mbit/s) rates
  • the synchronization control option that allows adding up to two bytes to an ASx bearer channel provides sufficient overhead capacity to transport combinations of DS1 or DS1C channels transparently (without interpreting or stripping and regenerating the framing embedded within the DS1 or DS1C)
  • the data rate of the C channel is 16 kbit/s
  • the LS0 bearer channel is transported in the LEX byte, using the
  • bit timing base of the input bearer channels (ASx, LSx) is synchronous with the ADSL modem timing base, then
  • the synchronization control byte shall indicate "no synchronization action"
  • the sc7-0 shall always be coded "XX0011XX" in binary, with X discretionary
  • the LEX byte shall carry AOC
  • the LEX byte shall be coded 00 hexadecimal
  • the sc0 may be set to 0 only in between transmissions of 5 concatenated and identical AOC messages
  • the format described for full overhead framing includes overhead to allow for the synchronization of the seven ASx and LSx bearer channels
  • the ADSL equipment may operate in a reduced overhead mode. This mode retains all the full overhead mode functions except synchronization control 3.4.4.3.1 Reduced overhead framing with separate fast and sync bytes
  • the AEX and LEX bytes shall be eliminated from the ADSL frame format, and both the fast and sync bytes shall carry overhead information
  • the fast byte carries the fast buffer CRC, indicator bits, and EOC messages, and the sync byte carries the interleaved buffer CRC and AOC message
  • the assignment of overhead functions to fast and sync bytes when using the full overhead framing and when using the reduced overhead framing with separate fast and sync bytes shall be as shown in Table 12
  • the structure of the fast data buffer shall be as shown in Figure 64 with AF and LF set to 0
  • the structure of the interleaved data buffer shall be as shown in Figure 65 with AI and LI set to 0
  • the AOC function shall be carried in a single overhead byte assigned to separate data frames within the superframe structure
  • the CRC remains in frame 0 and the indicator bits in frames 1, 34, and 35
  • the AOC and EOC bytes are assigned to alternate pairs of frames
  • the assignment of overhead functions shall be as shown in Table 13
  • scramblers are applied to the serial data streams without reference to any framing or symbol synchronization. Descrambling in receivers can likewise be performed independent of symbol synchronization.
  • the ATU-C shall support downstream transmission with at least any combination of the FEC coding capabilities shown in Table 14.
  • the ATU-C shall also support upstream transmission with at least any combination of the FEC coding capabilities shown in Table 23. 3.4.6.1 Reed-Solomon coding
  • R, i.e., RF or RI
  • C(D) = c0 D^(R-1) + c1 D^(R-2) + ... + c(R-2) D + c(R-1) is the check polynomial
  • the arithmetic is performed in the Galois Field GF(256), where α is a primitive element that satisfies the primitive binary polynomial x^8 + x^4 + x^3 + x^2 + 1
  • a data byte (d7, d6, ..., d1, d0)
  • the ATU: when entering the SHOWTIME state after completion of Initialization and Fast Retrain, the ATU shall align the first byte of the first Reed-Solomon codeword with the first data byte of DF 0 3.4.6.3 Interleaving
  • the Reed-Solomon codewords in the interleaved buffer shall be convolutionally interleaved
  • the interleaving depth varies, but shall always be a power of 2. Convolutional interleaving is defined by the rule: "Each of the N bytes B0, B1, ..., B(N-1) in a Reed-Solomon codeword is delayed by an amount that varies linearly with the byte index. More precisely, byte Bi (with index i) is delayed by (D-1) × i bytes, where D is the interleave depth"
  • N = 5
  • the output bytes from the interleaver always occupy distinct time slots when N is odd
  • a dummy byte shall be added at the beginning of the codeword at the input to the interleaver
  • the resultant odd-length codeword is then convolutionally interleaved, and the dummy byte shall then be removed from the output of the interleaver
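The interleaving rule above can be sketched as a batch operation over a run of equal-length (odd N) codewords. Real modems implement it as a byte-by-byte delay-line structure, but the placement of byte i at an output position (D-1)×i later is the same.

    def convolutional_interleave(codewords, depth):
        # codewords: list of equal-length codewords (length N, N odd after the
        # dummy-byte rule); depth: interleave depth D (a power of 2)
        n = len(codewords[0])
        out = [None] * (len(codewords) * n + (depth - 1) * (n - 1))
        in_pos = 0
        for cw in codewords:
            for i, byte in enumerate(cw):
                out[in_pos + (depth - 1) * i] = byte   # byte i delayed by (D-1)*i
                in_pos += 1
        # slots still None would, in a continuous stream, hold bytes of
        # neighbouring codewords (or dummy fill at start-up)
        return out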
  • 3.4.6.4 Support of higher downstream bit rates with S = 1/2
  • the ADSL downstream line rate is limited to approximately 8 Mbit/s per latency path
  • i.e., KI
  • if KI + R > 255
  • the KI data bytes shall be split into two consecutive RS codewords
  • a DMT time-domain signal has a high peak-to-average ratio (PAR) (its amplitude distribution is almost Gaussian), and large values may be clipped by the digital-to-analog converter
  • PAR peak-to-average ratio
  • the error signal caused by clipping can be considered as an additive negative impulse for the time sample that was clipped
  • the clipping error power is almost equally distributed across all tones in the symbol in which clipping occurs. Clipping is therefore most likely to cause errors on those tones that, in anticipation of a higher received SNR, have been assigned the largest number of bits (and therefore have the densest constellations)
  • These occasional errors can be reliably corrected by the FEC coding if the tones with the largest number of bits have been assigned to the interleave buffer
  • the numbers of bits and the relative gains to be used for every tone shall be calculated in the ATU-R receiver, and sent back to the ATU-C according to a defined protocol
  • the pairs of numbers are stored, in ascending order of frequency (or tone number i), in a bit and gain table
  • the "tone-ordered" encoding shall first assign the 8 × NF bits from the fast data buffer to the tones with the smallest number of bits assigned to them, and then the 8 × NI bits from the interleave data buffer to the remaining tones
  • All tones shall be encoded with the number of bits assigned to them; one tone may therefore have a mixture of bits from the fast and interleaved buffers
  • the ordered bit table b'i shall be based on the original bit table bi, as follows
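A sketch of one plausible reading of the tone-ordering rule (the tie-breaking by tone index is an assumption): visit tones in ascending order of their bit allocation, give the fast-buffer bits to the first tones in that order, and let the interleaved-buffer bits fill the rest, so that a single tone may straddle both buffers.

    def ordered_tones(bit_table):
        # bit_table[i] = number of bits b_i assigned to tone i
        return sorted(range(len(bit_table)), key=lambda i: (bit_table[i], i))

    def assign_bits(bit_table, fast_bits, interleaved_bits):
        stream = list(fast_bits) + list(interleaved_bits)   # fast-buffer bits first
        assignment, pos = {}, 0
        for tone in ordered_tones(bit_table):
            assignment[tone] = stream[pos:pos + bit_table[tone]]
            pos += bit_table[tone]
        return assignment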
  • the last two 4-dimensional symbols in the DMT symbol shall be chosen to force the convolutional encoder state to the zero state
  • the 2 LSBs of u are pre-determined, and only (x + y - 3) bits shall be extracted from the data frame buffer and shall be allocated to u 3.4.8.2 Bit conversion
  • the bits (u3, u2, u1) determine (v1, v0) and (w1, w0) according to Figure 69
  • the convolutional encoder shown in Figure 69 is a systematic encoder (i.e. u1 and u2 are passed
  • the expanded constellation is labeled and partitioned into subsets ("cosets") using a technique called mapping by set-partitioning
  • the four-dimensional cosets in Wei's code can each be written as the union of two Cartesian products of two 2-dimensional cosets
  • C4^0 = (C2^0 × C2^0) ∪ (C2^3 × C2^3)
  • the four constituent 2-dimensional cosets, denoted by C2^0, C2^1, C2^2, C2^3, are shown in Figure 71
  • the encoding algorithm ensures that the 2 least significant bits of a constellation point comprise the index i of the 2-dimensional coset C2^i in which the constellation point lies
  • the bits (v1, v0) and (w1, w0) are in fact the binary representations of this index
  • the three bits (u2, u1, u0) are used to select one of the 8 possible four-dimensional cosets
  • the 8 cosets are labeled
  • Figure 72 shows the trellis diagram based on the finite state machine in Figure 70, and the one-to-one correspondence between (u2, u1, u0) and the 4-dimensional cosets
  • S = (S3, S2, S1, S0) represents the current state
  • T = (T3, T2, T1, T0) represents the next state in the finite state machine. S is connected to T in the trellis diagram by a branch determined by the values of u2 and u1
  • the branch is labeled with the 4-dimensional coset specified by the values of u2, u1
  • the encoder shall select an odd-integer point (X, Y) from the square-grid constellation based on the b bits of either {v(b-1), v(b-2), ..., v1, v0} or {w(b-1), w(b-2), ..., w1, w0}
  • these b bits are identified with an integer label whose binary representation is (v(b-1), v(b-2), ..., v1, v0)
  • the integer values X and Y of the constellation point (X, Y) shall be determined from the b bits {v(b-1), v(b-2), ..., v1, v0} as follows: X and Y are the odd integers with twos-complement binary representations (v(b-1), v(b-3), ..., v1, 1) and (v(b-2), v(b-4), ..., v0, 1), respectively
  • MSBs most significant bits
  • v(b-1) and v(b-2) are the sign bits for X and Y respectively
  • the 4-bit constellation can be obtained from the 2-bit constellation by replacing each label n by a 2 × 2 block of labels as shown in Figure 74
  • the same procedure can be used to construct the larger even-bit constellations recursively
  • the constellations obtained for even values of b are square in shape
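For the square (even b) case just described, the mapping from the b bits to the odd-integer point (X, Y) can be sketched directly from the twos-complement rule above:

    def square_constellation_point(v_bits):
        # v_bits = [v_{b-1}, ..., v_1, v_0], with b even
        def twos_complement(bits):               # bits given MSB first
            value = 0
            for bit in bits:
                value = (value << 1) | bit
            if bits[0]:
                value -= 1 << len(bits)
            return value
        x = twos_complement(v_bits[0::2] + [1])  # (v_{b-1}, v_{b-3}, ..., v_1, 1)
        y = twos_complement(v_bits[1::2] + [1])  # (v_{b-2}, v_{b-4}, ..., v_0, 1)
        return x, y

    # b = 2 reproduces the 4-QAM points (+/-1, +/-1); b = 4 gives +/-1, +/-3, etc.
    assert square_constellation_point([0, 0]) == (1, 1)
    assert square_constellation_point([1, 1]) == (-1, -1)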
  • An algorithmic constellation encoder shall be used to construct constellations with a maximum number of bits equal to Ndownmax, where 8 ≤ Ndownmax ≤ 15. The constellation encoder shall not use trellis coding with this option 3.4.9.1 Bit extraction: Data bits from the frame data buffer shall be extracted according to a re-ordered bit allocation table b'i, least significant bit first. The number of bits per tone, b'i, can take any non-negative integer values not exceeding Ndownmax.
  • the frequency spacing, Δf, between sub-carriers is 4.3125 kHz, with a tolerance of +/- 50 ppm 3.4.11.1.1 Data sub-carriers
  • the lower limit of n depends on both the duplexing and service options selected. For example, for the ADSL above POTS service option, if overlapped spectrum is used to separate downstream and upstream signals, then the lower limit on n is determined by the POTS splitting filters; if frequency division multiplexing (FDM) is used, the lower limit is set by the downstream-upstream separation filters 3.4.11.1.2 Pilot
  • the data modulated onto the pilot sub-carrier shall be a constant {0,0}. Use of this pilot allows resolution of sample timing in a receiver modulo 8 samples. Therefore a gross timing error that is an integer multiple of 8 samples could still persist after a micro-interruption (e.g., a temporary short-circuit, open circuit or severe line hit); correction of such timing errors is made possible by the use of the synchronization symbol 3.4.11.1.3 Nyquist frequency
  • the carrier at the Nyquist frequency (#256) shall not be used for user data and shall be real valued 3.4.11.1.4 DC
  • the carrier at DC (#0) shall not be used, and shall contain no energy 3.4.11.2 Modulation by the inverse discrete Fourier transform (IDFT)
  • the constellation encoder and gain scaling generate only 255 complex values of Zi
  • the input values: 255 complex values plus zero at DC and one real value for Nyquist, if used
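A sketch of the IDFT modulation step in Python/NumPy. The 512-point transform size is an assumption inferred from the 255 data sub-carriers plus DC and the Nyquist carrier mentioned above; Hermitian symmetry is imposed so the time-domain samples come out real.

    import numpy as np

    def dmt_modulate(Z, n_fft=512):
        # Z: 255 complex constellation/gain-scaled values for sub-carriers 1..255
        X = np.zeros(n_fft, dtype=complex)
        X[1:256] = Z
        X[0] = 0.0                        # DC carries no energy
        X[256] = 0.0                      # Nyquist carrier: real valued (0 if unused)
        X[257:] = np.conj(X[255:0:-1])    # Hermitian symmetry -> real output
        return np.fft.ifft(X).real        # one DMT symbol (before cyclic prefix)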
  • the synchronization symbol permits recovery of the frame boundary after micro-interruptions that might otherwise force retraining
  • the cyclic prefix shall, however, be shortened to 32 samples, and a synchronization symbol (with a nominal length of 544 samples) is inserted after every 68 data symbols. That is,
  • the first pair of bits (d1 and d2) shall be used for the DC and Nyquist sub-carriers (the power assigned to them is zero, so the bits are effectively ignored)
  • the period of the PRD is only 511 bits, so d512 shall be equal to d1
  • d1 - d9 shall be re-initialized for each synchronization symbol, so each symbol uses the same data. Bits 129 and 130, which modulate the pilot carrier, shall be overwritten by {0,0}, generating the {+,+} constellation
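A sketch of the pseudo-random sequence (PRD) used by the synchronization symbol. The recursion d_n = d_{n-4} XOR d_{n-9}, with d1..d9 initialized to 1, is an assumption; the text above only fixes that d1-d9 are re-initialized for each synchronization symbol and that the period is 511 bits, so that d512 = d1.

    def prd_bits(count=512):
        d = [1] * 9                      # d1 .. d9 (re-initialized each sync symbol)
        while len(d) < count:
            d.append(d[-4] ^ d[-9])      # assumed recursion d_n = d_{n-4} XOR d_{n-9}
        # with a primitive degree-9 recursion the period is 511, so d[511] == d[0]
        return d[:count]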
  • the minimum set of sub-carriers to be used is the set used for data transmission (i.e., those for which bi > 0)
  • the data modulated onto each sub-carrier shall be as defined above; it shall not depend on which sub-carriers are used 3.4.12 Cyclic prefix
  • the transmitter includes all analog transmitter functions: the D/A converter, the anti-aliasing filter, the hybrid circuitry, and the high-pass part of the POTS or ISDN splitter 3.4.13.1 Maximum clipping rate
  • the maximum output signal of the transmitter shall be such that the signal shall be clipped no more than 0.00001% of the time
  • the signal to noise plus distortion ratio of the transmitted signal in a given sub-carrier is specified as the ratio of the rms value of the tone in that sub-carrier to the rms sum of all the non-tone signals in the 4.3125 kHz frequency band centered on the sub-carrier frequency. This ratio is measured for each sub-carrier used for transmission using a MultiTone Power Ratio (MTPR) test as shown in Figure 77
  • MTPR MultiTone Power Ratio
  • the MTPR of the transmitter in any sub-carrier shall be no less than (3 × Ndown_i + 20) dB, where Ndown_i is defined as the size of the constellation (in bits) to be used on sub-carrier i
  • the minimum transmitter MTPR shall be at least 38 dB (corresponding to an Ndown_i of 6) for any sub-carrier
  • An ATU-R may support STM transmission or ATM transmission or both. The framing modes that shall be supported depend upon the ATU-R being configured for either STM or ATM transport. If framing mode k is supported, then modes k-1, ..., 0 shall also be supported
  • the ATU-C and ATU-R shall indicate a framing mode number 0, 1, 2 or 3 which they intend to use. The lowest indicated framing mode shall be used
  • An ATU-R may support reconstruction of a Network Timing Reference (NTR) from the downstream indicator bits 3.5.1 STM Transmission Protocol Specific functionalities 3.5.1.1 ATU-R input and output V interfaces for STM transport
  • the functional data interfaces at the ATU-R are shown in Figure 78
  • Output interfaces for the high-speed downstream simplex bearer channels are designated AS0 through AS3
  • input-output interfaces for the duplex bearer channels are designated LS0 through LS2
  • the simplex channels are transported in the downstream direction only; therefore their data interfaces at the ATU-R operate only as outputs 3.5.1.3 Duplex channels - Transceiver bit rates
  • duplex channels are transported in both directions, so the ATU-R shall provide both input and output data interfaces
  • An ATU-R configured for STM transport shall support the full overhead framing structure 0
  • the support of full overhead framing structure 1 and reduced overhead framing structures 2 and 3 is optional
  • the ATU-R input and output T interfaces are identical to the ATU-C input and output interfaces, as shown in Figure 79 3.5.2.2 ATM Cell specific functionalities
  • the ATM cell specific functionalities performed at the ATU-R shall be identical to the ATM cell specific functionalities performed at the ATU-C 3.5.2.3 Framing Structure for ATM transport
  • An ATU-R configured for ATM transport shall support the full overhead framing structures 0 and 1
  • the ATU-R transmitter shall preserve T-R interface byte boundaries (explicitly present or implied by ATM cell boundaries) at the U-R interface, independent of the U-R interface framing structure
  • An ATU-R configured for ATM transport may support reconstruction of a Network Timing Reference (NTR)
  • NTR Network Timing Reference
  • An STM ATU-C transporting ATM cells and not preserving V-C byte boundaries at the U-C interface shall indicate during initialization that frame structure 0 is the highest frame structure supported
  • An STM ATU-C transporting ATM cells and preserving V-C byte boundaries at the U-C interface shall indicate during initialization that frame structure 0, 1, 2 or 3 is the highest frame structure supported, as applicable to the implementation. An ATM ATU-R receiver operating in framing structure 0 cannot assume that the ATU-C transmitter will preserve
  • V-C interface byte boundaries at the U-C interface and shall therefore perform the cell delineation bit-by-bit 3.5.3 Network timing reference
  • the ATU-R may deliver the 8 kHz signal to the T-R interface 3.5.4 Framing
  • ATU-R transmitter: Framing of the upstream signal (ATU-R transmitter) closely follows the downstream framing (ATU-C transmitter), but with the following exceptions
  • framing structures: Two types of framing are defined: full overhead and reduced overhead. Furthermore, two versions of full overhead and two versions of reduced overhead are defined
  • the four resulting framing structures are defined as for the ATU-C and are referred to as framing structures 0, 1, 2 and 3
  • the ATU-R transmitter is functionally similar to the ATU-C transmitter, except that up to three duplex data channels are synchronized to the 4 kHz ADSL DMT symbol rate (instead of up to four simplex and three duplex channels as is the case for the ATU-C)
  • the ATU-R transmitter and its associated reference points for data framing are shown in Figure 55 and Figure 56
  • the superframe structure of the ATU-R transmitter is identical to that of the ATU-C transmitter, shown in Figure 61
  • the ATU-R shall support the indicator bits
  • Each data frame shall be encoded into a DMT symbol. As specified for the ATU-C and shown in Figure 61, each frame is composed of a fast data buffer and an interleaved data buffer, and the frame structure has a different appearance at each of the reference points (A, B, and C)
  • the bytes of the fast data buffer shall be clocked into the constellation encoder first, followed by the bytes of the interleaved data buffer. Bytes are clocked least significant bit first
  • the assignment of bearer channels to the fast and interleaved buffers shall be configured during initialization with the exchange of a (BF, BI) pair for each data stream, where BF designates the number of bytes of a given data stream to allocate to the fast buffer, and BI designates the number of bytes allocated to the interleaved data buffer
  • the frame structure of the fast data buffer is the same as that specified for the ATU-C with the following exceptions
  • RF FEC redundancy bytes shall be added to the mux data frame (reference point A) to produce the FEC output data frame (reference point B), where RF is given in the C-RATES1 signal options received from the ATU-C during initialization. Because the data from the fast data buffer is not interleaved, the constellation encoder input data frame (reference point C) is identical to the FEC output data frame (reference point B) 3.5.4.1.2.2 Interleaved data buffer
  • Each byte shall be clocked into the CRC least significant bit first
  • the input data streams shall be synchronized to the ADSL timing base using the synchronization control mechanism (consisting of the synchronization control byte and the LEX byte). Forward-error-correction coding shall always be applied to the synchronization control byte(s)
  • the synchronization control byte shall always indicate "no synchronization action" 3.5.4.2.1 Synchronization for the fast data buffer
  • Synchronization control for the fast data buffer can occur in frames 2 through 33 and 36 through 67 of an ADSL superframe, where the fast byte may be used as the synchronization control byte. No synchronization action is to be taken for those frames in which the fast byte is used for CRC, fixed indicator bits, or EOC
  • the format of the fast byte when used as synchronization control for the fast data buffer shall be as given in Table 21
  • if the bit timing base of the input bearer channels (LSx) is synchronous with the ADSL modem timing base, then ADSL systems need not perform synchronization control by adding or deleting LEX bytes to/from the designated LSx channels
  • the synchronization control byte shall indicate "no synchronization action" (i.e., sc7-0 coded "000011X0" in binary, with X discretionary)
  • the LS0 bearer channel shall be transported in the LEX byte, using the "add LEX byte to designated LSx channel", with LS0 as the designated channel, every other frame on average 3.5.4.2.2
  • Synchronization for the interleaved data buffer: Synchronization control for the interleaved data buffer can occur in frames 1 through 67 of an ADSL superframe, where the sync byte may be used as the synchronization control byte. No synchronization action shall be taken during frame 0, where the sync byte is used for CRC, and during frames when the LEX byte carries the AOC
  • the format of the sync byte when used as synchronization control for the interleaved data buffer shall be as given in Table 22. In the case where no signals are allocated to the interleaved data buffer, the sync byte shall carry the AOC data directly, as shown in Figure 63 Table 22 - Sync byte format for synchronization
  • the LS0 bearer channel shall be transported in the LEX byte, using the "add LEX byte to designated LSx channel", with LS0 as the designated channel, every other frame on average
  • if the bit timing base of the input bearer channels (LSx) is synchronous with the ADSL modem timing base, then ADSL systems need not perform synchronization control by adding or deleting LEX bytes to/from the designated LSx channels, and the synchronization control byte shall indicate "no synchronization action"
  • the sc7-0 shall always be coded "000011XX" in binary, with X discretionary
  • the LEX byte shall carry AOC
  • the LEX byte shall be coded 00 hexadecimal
  • the sc0 may be set to 0 only in between transmissions of 5 concatenated and identical AOC messages 3.5.4.3 Reduced overhead framing
  • the format described in 3.5.4.1.2 for full overhead framing includes overhead to allow for the synchronization of the three LSx bearer channels
  • the synchronization function described in 3.5.4.2 is not required
  • the ADSL equipment may operate in a reduced overhead mode. This mode retains all the full overhead mode functions except synchronization control
  • the framing structure shall be as defined in 3.4.4.3.1 (when using separate fast and sync bytes) or 3.4.4.3.2 (when using merged fast and sync bytes) 3.5.5 Scramblers
  • the data streams output from the fast and interleaved buffers shall be scrambled separately using the same algorithm as for the downstream signal
  • the upstream data shall be Reed-Solomon coded and interleaved using the same algorithm as for the downstream data
  • the ATU-R shall support upstream transmission with at least any combination of the FEC coding capabilities shown in Table 23 Table 23 - Minimum FEC coding capabilities for ATU-R
  • the ATU-R shall also support upstream transmission with at least any combination of the FEC coding capabilities shown in Table 14 3.5.7 Tone ordering
  • the tone-ordering algorithm shall be the same as for the downstream data 3.5.8 Constellation encoder - Trellis version
  • An algorithmic constellation encoder shall be used to construct constellations with a maximum number of bits equal to Nupmax, where 8 ≤ Nupmax ≤ 15
  • the encoding algorithm shall be the same as that used for downstream data (with the substitution of the constellation limit of Nupmax for Ndownmax)
  • Frequency spacing, Δf, between sub-carriers shall be 4.3125 kHz with a tolerance of +/- 50 ppm
  • the channel analysis signal allows for a maximum of 31 carriers (at frequencies nΔf) to be used
  • the range of n depends on the service option selected. For example, for ADSL above POTS the lower limit is set by the POTS/ADSL splitting filters, the upper limit is set by the transmit and receive band-limiting filters, and shall be no greater than 31
  • the cut-off frequencies of these filters are at the discretion of the manufacturer because the range of usable n is determined during the channel estimation 3.5.11.1.2 Nyquist frequency
  • the sub-carrier at the Nyquist frequency shall not be used for user data and shall be real valued 3.5.11.1.3 DC
  • the sub-carrier at DC (#0) shall not be used, and shall contain no energy 3.5.11.2 Synchronization symbol
  • the cyclic prefix shall, however, be shortened to 4 samples, and a synchronization symbol (with a nominal length of 68 samples) inserted after every 68 data symbols. That is,
  • the data modulated onto each sub-carrier shall be as defined above; it shall not depend on which sub-carriers are used 3.5.12 Transmitter dynamic range
  • the transmitter includes all analog transmitter functions: the D/A converter, the anti-aliasing filter, the hybrid circuitry, and the POTS splitter 3.5.12.1 Maximum clipping rate
  • the maximum output signal of the transmitter shall be such that the signal shall be clipped no more than 0.00001% of the time
  • the signal to noise plus distortion ratio of the transmitted signal in a given sub-carrier ((S/N+D)i) is specified as the ratio of the rms value of the full-amplitude tone in that sub-carrier to the rms sum of all the non-tone signals in the 4.3125 kHz frequency band centered on the sub-carrier frequency. This ratio is measured for each sub-carrier used for transmission using a Multi-Tone Power Ratio (MTPR) test as shown in Figure 77
  • the MTPR of the transmitter in any sub-carrier shall be no less than (3 × Nup_i + 20) dB, where Nup_i is defined as the size of the constellation (in bits) to be used on sub-carrier i
  • the transmitter MTPR shall be +38 dB (corresponding to an Nup_i of 6) for any sub-carrier
  • a PMCCC encoder is formed by two (or more) constituent systematic encoders joined through one or more interleavers
  • the input information bits feed the first encoder and, after having been scrambled by the interleaver, enter the second encoder
  • a code word of a parallel concatenated code comprises the input bits to the first encoder followed by the parity check bits of both encoders
  • the disadvantage of the PMCCC is that it has an error floor around 10^-6. This could be improved with a good interleaver design, but using a large number of iterations 4.2.1 Parallel Multiple Convolutional Concatenated Codes Encoder
  • a PMCCC encoder comprises two parallel concatenated recursive systematic convolutional encoders separated by an interleaver. The encoders are arranged in a "parallel concatenation". In a preferred embodiment, the concatenated recursive systematic convolutional encoders may be identical
  • Figure 82 represents the proposed encoder
  • the input is a block of information bits
  • the two encoders generate parity symbols (uo and u'o) from two simple recursive convolutional codes
  • the key innovation of this technique is an interleaver "π", which permutes the original information bits before input to the second encoder
  • the permutation performed by the interleaver allows those input sequences for which one encoder produces low-weight codewords to usually cause the other encoder to produce high-weight codewords
  • the combination is surprisingly powerful
  • the resulting code has features similar to a "random" block code. In this way, we have the information symbols (u1 and u2) and two redundant symbols (uo and u'o). With this redundancy it is possible to reach longer loops and to reduce the PAR, at the cost of a slight increase of the constellation encoder
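A compact sketch of a parallel concatenation in Python. The constituent code used here, a rate-1/2 recursive systematic convolutional code with octal generators (1, 5/7), is chosen only for illustration; the patent does not fix the constituent encoders, and the interleaver below is a plain random permutation.

    import random

    def rsc_parity(bits):
        # recursive systematic code: feedback 1 + D + D^2, feedforward 1 + D^2
        s1 = s2 = 0
        parity = []
        for u in bits:
            a = u ^ s1 ^ s2
            parity.append(a ^ s2)
            s1, s2 = a, s1
        return parity

    def pmccc_encode(info_bits, interleaver):
        # systematic bits, parity of encoder 1, parity of encoder 2 (on permuted bits)
        return (info_bits,
                rsc_parity(info_bits),
                rsc_parity([info_bits[i] for i in interleaver]))

    N = 16
    pi = random.sample(range(N), N)                      # illustrative interleaver
    u = [random.randint(0, 1) for _ in range(N)]
    systematic, parity1, parity2 = pmccc_encode(u, pi)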
  • the first decoder should deliver a soft output to the second decoder
  • the logarithm of the Likelihood Ratio (LLR) of a bit decision is the soft decision information output by the MAP decoder
  • the optimum decision algorithm on the kth bit uk is based on the conditional log-likelihood ratio Lk
  • L1k = f(y, Lo, L2, k)   (115)
  • the final decision is based on Lk, which is passed through a hard limiter with zero threshold
  • the recursion can be started with the initial condition
  • in a PMCCC the interleaver establishes a relationship between portions of a codeword. It is generally assumed that when a PMCCC decoder is operating at low bit error rates, error sequences have small Hamming weights. From this, and properties of PMCCC, a mathematical structure can be developed for interleaver design, permitting the identification of a quantitatively optimal interleaver. Simulations show the math captures some, but not all, of the essential characteristics of a successful interleaver. Modifying a random interleaver according to some mathematical ideas gives excellent simulation results
  • the function of the interleaver in the PMCCC is to assure that at least one of the codeword components has high Hamming weight
  • An SMCCC encoder comprises two serial concatenated recursive systematic convolutional encoders separated by an interleaver
  • the encoders are arranged in a "serial concatenation"
  • the concatenated recursive systematic convolutional encoders are identical
  • Figure 87 represents the proposed encoder
  • an SMCCC encoder is a combination of two simple encoders
  • the mput is a block of information bits
  • the two encoders generate parity symbols (uo and u'o) from two simple recursive convolutional codes
  • the key innovation of this technique is an interleaver "π", which permutes the original information bits before input to the second encoder
  • the permutation causes those input sequences for which one encoder produces low-weight codewords to usually cause the other encoder to produce high-weight codewords
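For comparison with the parallel case, here is a sketch of the serial concatenation: the outer encoder's full (systematic plus parity) output is permuted by the interleaver and then re-encoded by the inner encoder. The same illustrative (1, 5/7) recursive systematic constituent code is assumed, not taken from the patent.

    import random

    def rsc_encode(bits):
        # rate-1/2 recursive systematic code; output order u_0, p_0, u_1, p_1, ...
        s1 = s2 = 0
        out = []
        for u in bits:
            a = u ^ s1 ^ s2
            out += [u, a ^ s2]
            s1, s2 = a, s1
        return out

    def smccc_encode(info_bits, interleaver):
        outer = rsc_encode(info_bits)                 # length 2K
        permuted = [outer[i] for i in interleaver]    # interleaver acts on coded bits
        return rsc_encode(permuted)                   # overall rate 1/4

    K = 8
    u = [random.randint(0, 1) for _ in range(K)]
    pi = random.sample(range(2 * K), 2 * K)
    codeword = smccc_encode(u, pi)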
  • In Figure 88 the block diagram of an iterative decoder is shown. It is based on two modules denoted by "SISO", one for each encoder, an interleaver, and a deinterleaver
  • SISO soft-input soft-output module (a four-port device, with two inputs and two outputs)
  • the SISO module is a four-port device that accepts at the input the sequences of probability distributions and outputs the sequences of probability distributions based on its inputs and on its knowledge of the code
  • the output probability distributions represent a smoothed version of the input distributions
  • the algorithm is completely general and capable of coping with parallel edges and also with encoders with rates greater than one, like those encountered in some concatenated schemes
  • the SISO algorithm requires that the whole sequence has been received before starting the smoothing process. The reason is that the backward recursion starts from the final trellis state
  • a more flexible decoding strategy is offered by modifying the algorithm in such a way that the SISO module operates on a fixed memory span and outputs the smoothed probability distributions after a given delay, D
  • This new algorithm is called the sliding-window soft-input soft-output (SW-SISO) algorithm
  • the SW-SISO algorithm solves the problem of continuously updating the probability distributions, without requiring trellis terminations
  • Their computational complexity in some cases is around 5 times that of other suboptimal algorithms like SOVA. This is due mainly to the fact that they are multiplicative algorithms
  • we overcome this drawback by proposing the additive version of the SISO algorithm 4.3.3 Interleaver design
  • SMCCC does not have a problem with error floors as PMCCC does
  • the error floor begins after 10^-7, which makes it suitable for ADSL applications
  • the interleaver establishes a relationship between portions of a codeword
  • p permutation length
  • the interleaver establishes a relationship between portions of a code-word
  • the method proposed for the interleaver is to disperse symbols as widely as possible in a "constellation way"
  • the number of iterations is a very important subject for the different applications of PMCCC and SMCCC. For applications where the delay is not important, a large number is acceptable. For real-time applications or for quasi-real-time applications it is important to use a number of iterations as low as possible while maintaining the advantages of this technique
  • the necessary number of iterations depends upon the Eb/N0 ratio in the receiver. In Figure 51, we present this relationship for the SMCCC case; we represent values of Eb/N0 below 0.1 dB, and for values around 2 dB it is sufficient to use fewer than 10 iterations
  • the PMCCC has an error floor around a BER of 10^-6,
  • the reason for this is that the SMCCC functions in an inner and outer encoder structure, while the PMCCC functions as two parallel encoders
  • In Figure 52 we present the error floor effect for PMCCC, and show that SMCCC does not exhibit the error floor effect at least until a BER of 10^-9. Simulation below 10^-9 requires a great deal of time, and it is not possible to give a simulation result
  • Figure 90 represents the proposed encoder
  • a PMCCC encoder is a combination of two simple encoders
  • the mput is a block of information bits
  • the two encoders generate parity symbols (uo and u'o) from two simple recursive convolutional codes
  • the key innovation of this technique is an interleaver "π", which permutes the original information bits before input to the second encoder
  • the permutation performed by the interleaver allows those input sequences for which one encoder produces low-weight codewords to usually cause the other encoder to produce high-weight codewords
  • the combination is surprisingly powerful
  • the resulting code has features similar to a "random" block code
  • Low-density parity-check codes are codes specified by a matrix containing mostly 0's and only a small number of 1 's.
  • an (n, j, k) low-density code is a code of block length n with a matrix like that of Table 24 where each column contains a small fixed number, j, of l's and each row contains a small fixed number, k, of 1' s. Note that this type of matrix does not have the check digits appearing in diagonal form as in Table 25. However, for coding purposes, the equations represented by these matrices can always be solved to give the check digits as explicit sums of information digits.
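A sketch of one way to build such a matrix (Gallager's random construction): stack j blocks of n/k rows, the first block having its k ones in consecutive columns and each further block being a random column permutation of the first, so that every column ends up with j ones and every row with k ones.

    import numpy as np

    def gallager_ldpc_matrix(n, j, k, seed=0):
        # regular (n, j, k) low-density parity-check matrix; n must be divisible by k
        assert n % k == 0
        rng = np.random.default_rng(seed)
        rows = n // k
        base = np.zeros((rows, n), dtype=np.uint8)
        for r in range(rows):
            base[r, r * k:(r + 1) * k] = 1            # k consecutive ones per row
        blocks = [base] + [base[:, rng.permutation(n)] for _ in range(j - 1)]
        return np.vstack(blocks)

    H = gallager_ldpc_matrix(20, 3, 4)
    assert (H.sum(axis=0) == 3).all() and (H.sum(axis=1) == 4).all()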
  • the minimum distance of a code is the number of positions in which the two nearest code words differ. Over the ensemble, the minimum distance of a member code is a random variable, and it can be shown that the distribution function of this random variable can be overbounded by a function. As the block length increases, for fixed j ≥ 3 and k > j, this function approaches a unit step at a fixed fraction δjk of the block length. Thus, for large n, practically all the codes in the ensemble have a minimum distance of at least n·δjk. In Table 26 this ratio of typical minimum distance to block length is compared to that for a parity-check code chosen at random, i.e., with a matrix filled in with equiprobable independent binary digits. It should be noted that for all the specific nonrandom procedures known for constructing codes, the ratio of minimum distance to block length appears to approach 0 with increasing block length
  • the probability of error using maximum likelihood decoding for low-density codes clearly depends upon the particular channel on which the code is being used. The results are particularly simple for the case of the BSC, or binary symmetric channel, which is a binary-input, binary-output, memoryless channel with a fixed probability of transition from either input to the opposite output
  • the low-density code has a probability of decoding error that decreases exponentially with block length, and the exponent is the same as that for the optimum code of slightly higher rate, as given in Table 27 Table 26.
  • Comparison of δjk, the ratio of typical minimum distance to block length for an (n, j, k) code, to δ, the same ratio for an ordinary parity-check code of the same rate.
  • the BSC is an approximation to physical channels only when there is a receiver that makes decisions on the incoming signal on a bit-by-bit basis. Since the decoding procedure to be described later can actually use the channel a posteriori probabilities, and since a bit-by-bit decision throws away available information, we are actually interested in the probability of decoding error of a binary-input, continuous-output channel. If the noise affects the input symbols symmetrically, then this probability can again be bounded by an exponentially decreasing function of the block length, but the exponent is a rather complicated function of the channel and code.
  • This patent application includes a computer program listing containing 37 pages, included as an appendix.
  • the program relates to a Reed-Solomon Encoder and Decoder and a PMCCC Encoder and Decoder for 2 parallel concatenated convolutional codes.


Abstract

A method of forward error correction for communication systems, comprising the steps of producing a symbol stream by forward error coding of a data stream, modulating the symbol stream to produce a modulated signal, transmitting the modulated signal over a communication link, receiving the modulated signal, where the received modulated signal includes errors, demodulating the received signal which includes errors, decoding the demodulated signal by a plurality of convolutional decoders, and, regenerating the data stream and eliminating the errors. A method of peak power level reduction for communication systems utilizing a plurality of coders comprising the steps of producing a peak reduced signal by encoding the data stream by the plurality of coders, modulating the peak reduced signal, and, transmitting the modulated peak reduced signal. An apparatus is described for implementing the method.

Description

FORWARD ERROR CORRECTING SYSTEM WITH ENCODERS CONFIGURED IN PARALLEL AND/OR SERIES
This non-provisional patent application claims the benefit under 35 U.S.C. Section 119(e) of United States
Provisional Patent Application No. 60/094,629, filed on July 30, 1998, and Provisional Patent Application No. 60/098,394, filed on August 30, 1998, and Provisional Patent Application No. 60/133,390, filed on May 10, 1999, all of which are incorporated herein by reference
Field of the Invention: The present invention relates to the use of forward error correction techniques in data transmission over wired and wireless systems using an optional Reed-Solomon encoder as an outer encoder and a multiple concatenated convolutional encoder (in serial or parallel configuration) as an inner encoder. A preferred embodiment of the invention pertains particularly to ADSL systems, as a representative species of wired-based systems
Background of the Invention: The invention is based on use of a multiple concatenated convolutional encoder in serial or in parallel configuration
In the serial case it is called Serial Multiple Concatenated Convolutional Code (SMCCC), and in the parallel case it is called
Parallel Multiple Concatenated Convolutional Code (PMCCC). This gives extra redundancy to the signal in a way that improves the performance of the codification (increasing the coding gain)
Theory of Trellis Coding: Modulation constellations of more than 2 points (such as Quadrature Amplitude Modulation (QAM), and Quaternary
Phase Shift Keying (QPSK)) are used to increase the bit rate at the cost of smaller Euclidean distances (distance between adjacent points in a signal constellation). Coding techniques are used to decrease transmission errors when transmitting over power-limited channels
Trellis Coding combines coding and modulation to improve bit error rate performance. As in other forms of forward error correction, the basic idea behind Trellis Coding is to introduce controlled redundancy in order to reduce channel error rates. What sets Trellis Codes apart is that this technique introduces redundancy by doubling the number of signal points in the
QAM constellation (partitioning). BPSK and QPSK signal constellations are shown in Figures 1 and 2
The actual (noisy) received signal will tend to be somewhere around the "correct" signal point. The receiver chooses the signal point closest to the noisy received signal. As more points are added to the signal constellation and the power is kept constant, the probability of error increases, because the Euclidean distance (distance between adjacent signal points) "d" is decreased and the receiver has a more difficult job making the correct decision. Thus it would make sense that the Euclidean distance "d" dominates the probability of error expressions
Since the power is the same for both constellations, the required energy is also the same. The signal points in BPSK are d = 2 apart and d = 1.414 apart for QPSK. The following expressions are for probability of error (for BPSK)
Substituting for d in the above gives the corresponding erfc expression for QPSK (the equations appear in the original only as images), where erfc is the complementary error function
Trellis Coding expands on this concept to increase the Euclidean path distance. For a more thorough derivation of the probability of error, and with the exception of the constant factor in front of the complementary error function, it should be noted that both error expressions depend on the signal spacing d and that the probability of QPSK errors is higher (not surprising since the signal spacing is smaller). Trellis coding enables us to recover from this increase in probability of error
M-QAM and PSK normally use a signal set of M = 2^k symbols in order to reduce the symbol rate by a factor of M. Examples of M = 4 QAM and QPSK are shown in Figures 3 and 4 respectively
Doubling the number of signal points in order to support two-state Trellis Coding, we get the signal constellations for two-state Trellis Coding shown in Figures 5 and 6. Thus, Trellis Coding uses 2·M possible symbols for the same factor-of-M reduction of bandwidth (and each signal is still transmitted during the same signaling period)
Trellis Coding provides controlled redundancy by doubling the number of signal points. In addition, Trellis coding defines the way in which signal transitions are allowed to occur (signal transitions that do not follow this scheme will be detected as errors). This is best explained using the Trellis Coded 8-PSK example. The 8-PSK signal constellation is shown in Figure 7, where we can see the individual signal points
Note that the signal point labels "0,1,2,3,4,5,6,7" do not correspond to the actual data being sent. They are only convenient ways to label the signal points and keep from cluttering up the graphics
Without coding, the performance of 8-PSK depends on d0 (d0 = 2 sin(π/8) = 0.765), which corresponds to a higher bit error rate than QPSK (d1 = 1.414). By using Trellis Coding, it is possible to improve the performance by restricting the way in which signals are allowed to transition
First the states of the trellis are defined. Let's label one state as "0426", and the other state as "1537". Each digit refers to one of four permitted signal points in the state (state points), with each state by itself representing a QPSK constellation, with each state's constellation being offset by 45 degrees from the other. Figure 8 describes a two-state trellis, 8-PSK system. If the system is in state "0426" only one of these four state points is used. If a "0" or "4" is transmitted, the system remains in the same state. If, however, a "2" or "6" is transmitted, the system switches to the "1537" state. Now, only one of these four state points is used. If a "3" or "7" is transmitted, the system remains in this state; otherwise if a "1" or a "5" is transmitted, it switches back to the "0426" state. Again, note that each symbol represents two bits, so that when switching states, the "QPSK constellation is shifted by 45 degrees". Assuming that all input signals are equally likely, all signal paths are traced out over time. Just as we had for non-
Trellis coding, the received signal includes noise and will tend to be located somewhere around the state points. The receiver again has to make a decision based on which signal point is closest, and a mistaken output state value will be chosen if the receiver made an incorrect decision
In order to illustrate error events, let's assume that the transmitter is sending continuous "7" symbols. Figure 9 illustrates the possible error events. In this case "5" followed by "6" is received instead of the transmitted "7" - "7" sequence. The Euclidean mean-squared distance for this path is the sum of the squares of the distance of each interval (see Figure 7 for an illustration of the Euclidean distances and Figure 8 for the Trellis Diagram)
sqrt(d^2(7,5) + d^2(7,6)) = sqrt(d1^2 + d0^2) = sqrt(2 + [2 sin(π/8)]^2) = 1.608, where d(7,5) and d(7,6) are the Euclidean distances between the signals "7" and "5", and "7" and "6" respectively. Figure 10 shows the case where "1" followed by "6" is received instead of the transmitted "7" - "7" sequence. The
Euclidean distance (see Figure 7 for an illustration of the Euclidean distances) for this path is sqrt(d^2(7,1) + d^2(7,6)) = sqrt(d1^2 + d0^2) = sqrt(2 + [2 sin(π/8)]^2) = 1.608
Figure 11 shows the case where "5" followed by "2" is received instead of the transmitted "7" - "7" sequence. The Euclidean distance (see Figure 7 for an illustration of the Euclidean distances and Figure 8 for the Trellis Diagram) for this path is
sqrt(d^2(7,5) + d^2(7,2)) = sqrt(2 + [2 sin(3π/8)]^2) = 2.33
Figure 12 shows the case where "1" followed by "2" is received instead of the transmitted "7" - "7" sequence. The Euclidean distance (see Figure 7 for an illustration of the Euclidean distances and Figure 8 for the Trellis Diagram) for this path is
sqrt(d^2(7,1) + d^2(7,2)) = sqrt(2 + [2 sin(3π/8)]^2) = 2.33
The only remaining error event is the single interval "3" instead of "7" error event, which has a Euclidean distance of 2 (see Figure 7)
Because of their large Euclidean distance (2.33), the "5" - "2" and "1" - "2" error events are least likely. The "1" - "6" and "5" - "6" error events are most likely because of their low Euclidean distance (1.608)
The minimum Euclidean distance for a trellis is the minimum free Euclidean distance "dE" (similar to the minimum free distance in convolutional coding). For the above example dE = 1.608
Since dE is a measure of the closest spacing between adjacent state points (and therefore also more likely to cause errors), it dictates the lower bound for probability of error for the entire Trellis in the following way
Pe >= α(dE) · erfc((dE/2)·sqrt(E/N0))
where α(dE) is the number of error paths at distance dE. In the 2-state trellis example, there are 2 error paths at a distance of dE. Therefore the probability of error is
Pe >= erfc((1.608/2)·sqrt(E/N0))
We found the probability of error for regular (non-Trellis-coded) QPSK to be
Pe = erfc((1.414/2)·sqrt(E/N0))
The improvement of two-state Trellis Coding over QPSK is therefore 1.608/1.414 or 1.1 dB
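The distances and the 1.1 dB figure quoted above can be checked with a few lines of Python:

    import math

    d0 = 2 * math.sin(math.pi / 8)        # adjacent 8-PSK spacing, ~0.765
    d1 = math.sqrt(2)                     # QPSK spacing, ~1.414
    dE = math.sqrt(d1 ** 2 + d0 ** 2)     # two-state trellis free distance, ~1.608
    gain_db = 20 * math.log10(dE / d1)    # ~1.1 dB improvement over uncoded QPSK
    print(round(dE, 3), round(gain_db, 2))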
This is a low coding gain for the amount of overhead required to handle Trellis coding. One might ask, is there a way to increase the coding gain obtainable with Trellis Coding? There certainly is. First, it is possible to increase the number of trellis states above 2, such as the four-state trellis shown in Figure 13. Note that the permitted state transitions are only drawn for column "I". The same state transitions are again permitted in columns "II", "III", and so on
Increasing the number of states, one also increases the Euclidean distances. For example, in the above four-state trellis coding case, all error events have a Euclidean distance of more than 2 (single "4" error). This single "4" error is the only "lowest" error event with minimum Euclidean path distance. Therefore, the lower bound for 4-state Trellis Coding is
Pe >= erfc((2/2)·sqrt(E/N0))
Comparing this equation to the equation for regular QPSK, we have a coding gain of 2:1, or 3 dB. Table 1 illustrates further coding gains that can be obtained by using even more states in the trellis. Table 1: Trellis Coding Gain vs Number of Trellis States
Another way to improve codmg gam m Trellis Codmg is to go to more than 2 dimensions
Summary of the Invention The present invention compnses forward error correction techniques in data transmission over wired systems using an optional Reed Solomon encoder as an outer encoder and a multiple concatenated convolutional encoder (MCCC) (m seπal or parallel configuration) as an inner encoder With an "Optional" Reed-Solomon outer encoder we mean that it could be present or not We descπbe its application to ADSL DMT (Discrete Multi-Tone, multiple-camer) based systems The extension to CAP/QAM (single-earner) based, other xDSL systems (HDSL, VDSL, HDSL2, etc ), other wired commumcation systems, wireless systems and satellite systems is straightforward ADSL modems are designed to operate between a Central Office CO (or a similar point of presence) and a customer premises CPE As such they use existmg telephone network wiring between the CO and the CPE There are several modems in this class which function in generally similar manner All of these modems transmit their signals usuallv above the voice band As such, they are dependent on adequate frequency response above voice band
With the technique that we propose in this invention, it is possible to reach longer loops or reduce the transmitter power for ADSL systems.
For wireless systems it is possible to reduce the power consumption, increase the coverage area and extend the life of the portable systems.
For satellite systems it is possible to increase the G/T factor by around 4 dB, to increase the life of the satellite, to increase the coverage area and to reduce the requirements of the terrestrial systems. With the use of Trellis Coded Modulation (TCM) it is possible to obtain coding gains between 3 and 6 dB (depending on the dimension of the trellis). Using the technique that we propose, the performance is within 1 dB of the Shannon limit, at a bit error probability of 10^-7.
MCCC achieves near-Shannon-limit error correction performance. We have done some simulations that show bit error probabilities as low as 10^-5 at Eb/N0 = 0.6 dB. PMCCC yields very large coding gains (around 10 or 11 dB). In the PMCCC case, beyond this value of about 10 dB the role of the interleaver is very critical, and to avoid the error floor it is necessary to make a good design of the interleaver. In our design the Reed-Solomon outer encoder helps to keep this error floor lower than 10^-7. In this invention, we present simulation results, and we compare the Reed-Solomon encoder (for R=8 and R=16) with the Trellis plus Reed-Solomon (T+R=8 and T+R=16) and the two PCCC plus Reed-Solomon (TC+R=8 and TC+R=16). In all these cases we will not take into account the payload of the Reed-Solomon code, because it will have the same effect in all coding techniques. The results we present are for Gaussian noise. In many wire-line systems, broadband impulse noise is also a significant transmission impairment. Although we have not modeled impulse noise effects in this analysis, in DMT systems impulse noise whose duration is short compared to the frame size appears to be rather Gaussian-like, since it passes through a DFT in the receiver. Furthermore, because the noise is broadband, the noise energy in the signal band is distributed among the various frequency bins. Thus the additional immunity against additive white Gaussian noise provided by the trellis code should be beneficial for impulse noise as well.
We present an encoder, decoders, and some simulation results.
Brief Description of the Several Views of the Drawings
Figure 1 shows a BPSK signal constellation,
Figure 2 shows a QPSK signal constellation,
Figure 3 shows a QAM signal constellation with M=4.
Figure 4 shows a QPSK signal constellation.
Figure 5 shows a QAM signal constellation with M=8 (used with two-state Trellis),
Figure 6 shows a PSK signal constellation with M=8 (used with two-state Trellis),
Figure 7 shows a signal constellation with M=8 (used with two-state Trellis),
Figure 8 shows a two-state Trellis 8-PSK system,
Figure 9 shows an error event "5" → "6" in a 2-state Trellis encoding,
Figure 10 shows an error event "1" → "6" in a 2-state Trellis encoding,
Figure 11 shows an error event "5" → "2" in a 2-state Trellis encoding,
Figure 12 shows an error event "1" → "2" in a 2-state Trellis encoding,
Figure 13 shows a four-state Trellis, 8-PSK system,
Figure 14 shows a serially Concatenated (n,k,N) block code,
Figure 15 shows the action of a uniform interleaver of length 4 on sequences of weight 2,
Figure 16 shows a serially Concatenated (n,k,N) Convolutional code,
Figure 17 shows a code sequence in A_{l,h,j},
Figure 18 shows analytical bounds for SMCBC1 for N = 4, 40, 400 and 4000,
Figure 19 shows analytical bounds for SMCBC2 for N = 5, 50, 500 and 5000,
Figure 20 shows analytical bounds for SMCBC3 for N = 7, 70, 700 and 7000,
Figure 21 shows analytical bounds for SMCCC1 for N = 200, 400, 600, 800, 1000 and 2000,
Figure 22 shows analytical bounds for SMCCC2 for N = 200, 400, 600, 800, 1000, 2000,
Figure 23 shows analytical bounds for SMCCC3 for N = 200, 400, 600, 800, 1000, 2000,
Figure 24 shows analytical bounds for SMCCC4,
Figure 25 shows a PMCCC,
Figure 26 shows a transmission system structure,
Figure 27 shows notations in a transmission system structure,
Figure 28 shows a PMCCC of three convolutional codes,
Figure 29 shows a signal flow graph for extrinsic information,
Figure 30 shows an iterative decoder structure for three parallel concatenated codes,
Figure 31 shows an iterative decoder structure for two parallel concatenated codes,
Figure 32 shows the convergence of turbo coding bit-error probability versus number of iterations for various Eb/No using the SW2-BCJR algorithm,
Figure 33 shows the convergence of turbo coding bit-error probability versus number of iterations for various Eb/No using the SWAL2-BCJR algorithm,
Figure 34 shows the bit-error probability as a function of the bit signal-to-noise ratio using the SW2-BCJR and SWAL2-BCJR algorithms with five iterations,
Figure 35 shows the number of iterations to achieve several bit-error probabilities as a function of the bit signal-to-noise ratio using the SWAL2-BCJR algorithm,
Figure 36 shows the number of iterations to achieve several bit-error probabilities as a function of the bit signal-to-noise ratio using the SW2-BCJR algorithm,
Figure 37 shows a basic structure for backward computation in the log-BCJR MAP algorithm,
Figure 38 shows a Trellis Termination,
Figure 39 shows an example where a block interleaver fails to "break" the input sequence,
Figure 40 shows the two PMCCC performance,
Figure 41 shows performance with short block sizes,
Figure 42 shows three-code performance,
Figure 43 shows a comparison of SMCBC and PMCBC with various interleaver lengths chosen so as to yield the same input decoding delay,
Figure 44 shows a comparison of SMCCC and PMCCC with four-state MCCs,
Figure 45 shows a block diagram of a parallel concatenated convolutional code (PMCCC): (a) a PMCCC of rate 1/3, (b) iterative decoding of a PMCCC,
Figure 46 shows a block diagram of a serial concatenated convolutional code (SMCCC): (a) an SMCCC of rate 1/3, (b) iterative decoding of an SMCCC,
Figure 47 shows a trellis encoder, Figure 48 shows an edge of the trellis section,
Figure 49 shows the soft-input soft-output (SISO) model.
Figure 50 shows the convergence of PMCCC-decoding bit error probability versus the number of iterations using the ASW-SISO algorithm,
Figure 51 shows the convergence of iterative decoding for a serial concatenated code: bit error rate probability versus number of iterations using the ASW-SISO algorithm,
Figure 52 shows a comparison of two rate 1/3 PMCCC and SMCCC; the curves refer to six and nine iterations of the decoding algorithm and to an equal input decoding delay of 16,384,
Figure 53 shows a block diagram for a modem transmitter in accordance with this invention, for the Central Office and for STM transport,
Figure 54 shows a block diagram for a modem transmitter in accordance with this invention, for the Central Office and for ATM transport.
Figure 55 shows a block diagram for a modem transmitter in accordance with this invention, for the Remote modem and for STM transport.
Figure 56 shows a block diagram for a modem transmitter in accordance with this invention, for the Remote modem and for ATM transport.
Figure 57 shows the ATU-C functional interfaces for STM transport at the V-C reference point.
Figure 58 shows the ATU-C functional interfaces to the ATM layer at the V-C reference point.
Figure 59 shows an ATM cell delineation state machine,
Figure 60 shows an example implementation of the Admeasurement,
Figure 61 shows an ADSL superframe structure - ATU-C transmitter.
Figure 62 shows a fast synchronization byte ("fast byte") format - ATU-C transmitter.
Figure 63 shows an interleaved synchronization byte ("sync byte") format - ATU-C transmitter,
Figure 64 shows a fast data buffer - ATU-C transmitter,
Figure 65 shows an interleaved data buffer - ATU-C transmitter,
Figure 66 shows a scrambler,
Figure 67 shows a tone ordering and bit extraction example (without trellis coding),
Figure 68 shows a tone ordering and bit extraction example (with trellis coding), Figure 69 shows a conversion of u to v and w,
Figure 70 shows a finite state machine for Wei's encoder,
Figure 71 shows a convolutional Encoder,
Figure 72 shows a trellis diagram,
Figure 73 shows constellation labels for b = 2 and b = 4,
Figure 74 shows an expansion of point n into the next larger square constellation,
Figure 75 shows constellation labels for b = 3,
Figure 76 shows constellation labels for b = 5,
Figure 77 shows an MTPR test,
Figure 78 shows the ATU-R functional interfaces for STM transport at the T-R reference point,
Figure 79 shows the ATU-R functional interfaces to the ATM layer at the T-R reference point,
Figure 80 shows a fast data buffer - ATU-R transmitter,
Figure 81 shows an interleaved data buffer - ATU-R transmitter,
Figure 82 shows a two parallel concatenated convolutional encoder,
Figure 83 shows a conversion of u to v and w in the PMCCC encoder,
Figure 84 shows a decoder for PMCCC,
Figure 85 shows the convergence of the "constellation" interleaver for PMCCC,
Figure 86 shows an interleaver for PMCCC,
Figure 87 shows a Serial Convolutional Concatenated Encoder,
Figure 88 shows a decoder for SMCCC, Figure 89 shows an interleaver for SMCCC,
Figure 90 shows the Convolutional Concatenated Encoder used for simulations.
Figure 91 shows simulations for PMCCC, and,
Figure 92 shows the Convolutional encoder used for simulations.
Detailed Description of the Preferred Embodiment
We hereby incorporate by reference the following references:
1. Rauschmayer, Dennis J., "ADSL/VDSL Principles", Macmillan Technical Publishing, 1999.
2. ITU G.992.1, "ADSL Transceivers", ITU, 1999.
3. ITU I.432, "B-ISDN user-network interface - physical layer specification", ITU, 1993.
4. Benedetto, Divsalar, Montorsi and F. Pollara, "Serial Concatenation of Interleaved Codes: Performance Analysis, Design, and Iterative Decoding", The Telecommunications and Data Acquisition Progress Report 42-126, Jet Propulsion Laboratory, Pasadena, California, pp. 1-26, August 15, 1996.
5. Benedetto, Divsalar, Montorsi and F. Pollara, "A Soft-Output Maximum A Posteriori (MAP) Module to Decode Parallel and Serial Concatenated Codes", The Telecommunications and Data Acquisition Progress Report 42-127, Jet Propulsion Laboratory, Pasadena, California, pp. 1-20, November 15, 1996.
6. L. R. Bahl, J. Cocke, F. Jelinek, and J. Raviv, "Optimal Decoding of Linear Codes for Minimizing Symbol Error Rate," IEEE Transactions on Information Theory, pp. 284-287, March 1974.
7. D. Divsalar and F. Pollara, "Turbo Codes for PCS Applications", Proceedings of ICC'95, Seattle, Washington, pp. 54-59, June 1995.
8. D. Divsalar and F. Pollara, "Multiple Turbo Codes", Proceedings of IEEE MILCOM'95, San Diego, California, November 5-8, 1995.
9. D. Divsalar and F. Pollara, "Soft-Output Decoding Algorithms in Iterative Decoding of Turbo Codes," The Telecommunications and Data Acquisition Progress Report 42-124, Jet Propulsion Laboratory, Pasadena, California, pp. 63-87, February 15, 1995.
1 Performance Analysis, Design and Iterative Decoding of SMCCC and PMCCC
1.1 Analytical Bounds to the Performance of Serially Multiple Concatenated Codes
1.1.1 Serially Multiple Concatenated Block Codes (SMCBC)
The scheme of two serially concatenated block codes is shown in Figure 14. It is composed of two cascaded CCs, the outer (N, k) code C_o with rate R_c^o = k/N and the inner (n, N) code C_i with rate R_c^i = N/n, linked by an interleaver of length N. The overall SMCBC is then an (n, k) code, and we will refer to it as the (n, k, N) code C_s, including also the interleaver length. In the following, we will derive an upper bound to the ML performance of the overall code C_s. We assume that the CCs are linear, so that the SMCBC also is linear and the uniform error property applies, i.e., the bit-error probability can be evaluated assuming that the all-zero codeword has been transmitted.
A crucial step in the analysis consists of replacing the actual interleaver, which performs a permutation of the N input bits, with an abstract interleaver called the "uniform interleaver". This abstract interleaver is defined as a probabilistic device that maps a given input word of weight l into all distinct permutations of it, with equal probability p = 1/(N choose l) (see Figure 15). The output word of the outer code and the input word of the inner code share the same weight. Use of the uniform interleaver permits the computation of the "average" performance of SMCBCs, intended as the expectation of the performance of SMCBCs using the same MCCs, taken over the ensemble of all interleavers of a given length. It can be proved that the average performance is meaningful, in the sense that there will always be, for each value of the signal-to-noise ratio, at least one particular interleaver yielding performance better than or equal to that of the uniform interleaver. Let us define the input-output weight enumerating function (IOWEF) of the SMCBC C_s as

A^{Cs}(W, H) = Σ_{w,h} A^{Cs}_{w,h} W^w H^h   (1)

where A^{Cs}_{w,h} is the number of codewords of the SMCBC with weight h associated with an input word of weight w. We also define the conditional weight enumerating function (CWEF) A^{Cs}_w(H) of the SMCBC as the weight distribution of the codewords of the SMCBC that have input word weight w. It is related to the IOWEF by

A^{Cs}(W, H) = Σ_{w=0}^{k} W^w A^{Cs}_w(H)   (2)
With knowledge of the CWEF, an upper bound to the bit-error probability of the SMCBC can be obtained in the form

P_b(e) ≤ Σ_{w=1}^{k} (w/k) A^{Cs}_w(H) |_{H = e^{-R_c E_b/N_0}}   (3)

where R_c = k/n is the rate of C_s, and E_b/N_0 is the signal-to-noise ratio per bit.
The problem consists in the evaluation of the CWEF of the SMCBC from the knowledge of the CWEFs of the outer and inner codes, which we call A^{Co}_w(L) and A^{Ci}_l(H). To do this, we exploit the properties of the uniform interleaver, which transforms a codeword of weight l at the output of the outer encoder into all of its (N choose l) distinct permutations. As a consequence, each codeword of the outer code C_o of weight l, through the action of the uniform interleaver, enters the inner encoder generating (N choose l) codewords of the inner code C_i. Thus, the number A^{Cs}_{w,h} of codewords of the SMCBC of weight h associated with an input word of weight w is given by

A^{Cs}_{w,h} = Σ_{l=0}^{N} [ A^{Co}_{w,l} × A^{Ci}_{l,h} ] / (N choose l)   (4)

From Equation (4), we derive the expressions of the CWEF and IOWEF of the SMCBC:
A^{Cs}_w(H) = Σ_{l=0}^{N} [ A^{Co}_{w,l} × A^{Ci}_l(H) ] / (N choose l)   (5)

A^{Cs}(W, H) = Σ_{l=0}^{N} [ A^{Co}(W, l) × A^{Ci}_l(H) ] / (N choose l)   (6)

where A^{Co}(W, l) is the conditional weight distribution of the input words that generate codewords of the outer code of weight l.
1.1.2 Serially Multiple Concatenated Convolutional Codes
The structure of a serially multiple concatenated convolutional code (SMCCC) is shown in Figure 16. It refers to the case of two convolutional CCs, with the outer code C_o with rate R_c^o = k/p and the inner code C_i with rate R_c^i = p/n, joined by an interleaver of length N bits. In this way they generate an SMCCC C_s with rate R_c = k/n. Note that N must be an integer multiple of p. We assume, as before, that the convolutional CCs are linear, so that the SMCCC is linear as well, and the uniform error property applies. The exact analysis requires the use of a hypertrellis having as hyperstates pairs of states of the outer and inner codes. The hyperstates S_{i,l} and S_{j,m} are joined by a hyperbranch that comprises all pairs of paths of length N/p that join states s_i and s_j of the inner code and states s_l and s_m of the outer code, respectively. Each hyperbranch is thus an equivalent SMCBC labeled with an IOWEF that can be evaluated as explained in the previous subsection. From the hypertrellis, the upper bound to the bit-error probability can be obtained through the standard transfer function technique employed for convolutional codes.
1.2 Design of Serially Multiple Concatenated Codes
For practical applications, SMCCCs are to be preferred to SMCBCs. One reason is that maximum a posteriori algorithms are less complex for convolutional than for block codes; a second is that the interleaver gain can be greater for convolutional CCs, provided they are suitably designed. Hence, we deal mainly with the design of SMCCCs, extending our conclusions to SMCBCs when appropriate. Consider the SMCCC depicted in Figure 16. Its performance can be approximated by that of an equivalent block code whose IOWEF labels the branch of the hypertrellis joining the zero states of the outer and inner codes. Denoting by A^{Cs}_w(H) the CWEF of this equivalent block code, we can rewrite the upper bound, Equation (3), as (a subscript m will denote minimum, and a subscript M will denote maximum)

P_b(e) ≤ Σ_w (w/k) A^{Cs}_w(H) |_{H = e^{-R_c E_b/N_0}}   (7)

which, making the codeword weights h explicit, can be written as

P_b(e) ≤ Σ_{h=h_m} Σ_{w=w_m^o} (w/k) A^{Cs}_{w,h} e^{-h R_c E_b/N_0}   (10)
where w_m^o is the minimum weight of an input sequence generating an error event of the outer code, and h_m is the minimum weight of the codewords of C_s (since the input sequences of the inner code are not unconstrained independent identically distributed (i.i.d.) binary sequences but, instead, codewords of the outer code, h_m can be greater than the inner code free distance d_f^i). By "error event of a convolutional code" we mean a sequence diverging from the zero state at time zero and remerging into the zero state at some discrete time j > 0. For constituent block codes, an error event is simply a codeword.
The coefficients A^{Cs}_{w,h} of the equivalent block code can be obtained from Equation (4) once the quantities A^{Co}_{w,l} and A^{Ci}_{l,h} of the CCs are known. To evaluate them, consider a rate R = p/n convolutional code C with memory ν, and its equivalent (N/R, N − pν) block code, whose codewords are all sequences of length N/R bits of the convolutional code starting from and ending at the zero state. By definition, the codewords of the equivalent block code are concatenations of error events of the convolutional code. Let

A(l, H, j) = Σ_h A_{l,h,j} H^h   (11)

be the weight enumerating function of sequences of the convolutional code that concatenate j error events with total input weight l (see Figure 17), where A_{l,h,j} is the number of sequences of weight h, input weight l, and number of concatenated error events j. For N much larger than the memory of the convolutional code, the coefficient A^C_{l,h} of the equivalent block code can be approximated (this assumption permits neglecting the length of error events compared to N, and also assumes that the number of ways in which j error events can be arranged in a register of length N is (N/p choose j); the ratio N/p derives from the fact that the code has rate p/n, and thus N bits correspond to N/p input words or, equivalently, trellis steps) by

A^C_{l,h} ≅ Σ_{j=1}^{n_M} (N/p choose j) A_{l,h,j}   (12)
where n_M, the largest number of error events concatenated in a codeword of weight h and generated by a weight-l input sequence, is a function of h and l that depends on the encoder. Let us return now to the block code equivalent to the SMCCC. Using the previous result of Equation (12) with j = n^i for the inner code, and the analogous one, j = n^o, for the outer code (superscripts o and i will refer to quantities pertaining to the outer and inner code, respectively),

A^{Co}_{w,l} ≅ Σ_{n^o=1}^{n^o_M} (N/p choose n^o) A^{Co}_{w,l,n^o},    A^{Ci}_{l,h} ≅ Σ_{n^i=1}^{n^i_M} (N/p choose n^i) A^{Ci}_{l,h,n^i}   (13)

and substituting them into Equation (4), we obtain the coefficient A^{Cs}_{w,h} of the serially concatenated block code equivalent to the SMCCC in the form

A^{Cs}_{w,h} ≅ Σ_{l=d_f^o}^{N} Σ_{n^o=1}^{n^o_M} Σ_{n^i=1}^{n^i_M} [ (N/p choose n^o) (N/p choose n^i) / (N choose l) ] A^{Co}_{w,l,n^o} A^{Ci}_{l,h,n^i}   (14)
where d_f^o is the free distance of the outer code. By free distance d_f we mean the minimum Hamming weight of error events for convolutional CCs and the minimum Hamming weight of codewords for block CCs. We are interested in large interleaver lengths and thus use for the binomial coefficient the asymptotic approximation

(N choose l) ≅ N^l / l!

Substitution of this approximation in Equation (14) yields

A^{Cs}_{w,h} ≅ Σ_{l=d_f^o}^{N} Σ_{n^o=1}^{n^o_M} Σ_{n^i=1}^{n^i_M} [ l! / (n^o! n^i! p^{n^o+n^i}) ] N^{n^o+n^i−l} A^{Co}_{w,l,n^o} A^{Ci}_{l,h,n^i}   (15)

Finally, substituting Equation (15) into Equation (10) gives the bit-error probability bound in the form

P_b(e) ≤ Σ_{h=h_m} Σ_{w=w_m^o} (w/k) Σ_{l=d_f^o}^{N} Σ_{n^o=1}^{n^o_M} Σ_{n^i=1}^{n^i_M} [ l! / (n^o! n^i! p^{n^o+n^i}) ] N^{n^o+n^i−l} A^{Co}_{w,l,n^o} A^{Ci}_{l,h,n^i} e^{−h R_c E_b/N_0}   (16)
Using Expression (16) as the starting point, we will obtain some important design considerations. The bound, Expression (16), to the bit-error probability is obtained by adding terms of the summation with respect to the SMCCC weights h. The coefficients of the exponentials in h depend, among other parameters, on N. For large N, and for a given h, the dominant coefficient of the exponential in h is the one for which the exponent of N is maximum. Define this maximum exponent as

α(h) ≜ max_{w,l} { n^o + n^i − l − 1 }   (17)

where the −1 accounts for the factor w/k in Expression (16), since k is proportional to N. Evaluating α(h) in general is not possible without specifying the CCs. Thus, we will consider two important cases for which general expressions can be found.
1.2.1 The Exponent of N for the Minimum Weight
For large values of E_b/N_0, the performance of the SMCCC is dominated by the first term of the summation in h, corresponding to the minimum value h = h_m. Remembering that, by definition, n^i_M and n^o_M are the maximum numbers of concatenated error events in codewords of the inner and outer code of weights h_m and l, respectively, the following inequalities hold true:

n^i_M ≤ ⌊ h_m / d_f^i ⌋   (18)

n^o_M ≤ ⌊ l / d_f^o ⌋   (19)

and

α(h_m) = n^o_M + n^i_M − l_m(h_m) − 1 ≤ ⌊ h_m / d_f^i ⌋ + ⌊ l_m(h_m) / d_f^o ⌋ − l_m(h_m) − 1   (20)
where l_m(h_m) is the minimum weight l of codewords of the outer code yielding a codeword of weight h_m of the inner code, and ⌊x⌋ means "integer part of x" (floor value). In most cases, l_m(h_m) < 2 d_f^o and h_m < 2 d_f^i, so that n^i_M = n^o_M = 1 and Equation (20) becomes

α(h_m) = 1 − l_m(h_m) ≤ 1 − d_f^o   (21)

The result, Equation (21), shows that the exponent of N corresponding to the minimum weight of SMCCC codewords is always negative for d_f^o ≥ 2, thus yielding an interleaver gain at high E_b/N_0. Substitution of the exponent α(h_m) into Expression (16), truncated to the first term of the summation in h, yields

P_b(e) ≲ B_m N^{1−l_m(h_m)} e^{−h_m R_c E_b/N_0}   (22)

where the constant B_m, independent of N and of E_b/N_0, is obtained from Equation (15) by retaining only the terms with l = l_m(h_m) and n^o = n^i = 1, and W_m is the set of input weights w that generate codewords of the outer code with weight l_m(h_m). Expression (22) suggests the following conclusions:
1. For the values of E_b/N_0 and N where the SMCCC performance is dominated by its free distance d_{f,s} = h_m, increasing the interleaver length yields a gain in performance.
2. To increase the interleaver gain, one should choose an outer code with a large d_f^o.
3. To improve the performance with E_b/N_0, one should choose an inner and outer code combination such that h_m is large.
These conclusions do not depend on the structure of the CCs, and thus they apply for both recursive and nonrecursive encoders. However, for a given E_b/N_0, there seems to be a minimum value of N that forces the bound to diverge. In other words, there seem to be coefficients of the exponentials in h, for h > h_m, that increase with N. To investigate this phenomenon, we will evaluate the largest exponent of N, defined as

α_M ≜ max_h { α(h) } = max_{w,l} { n^o + n^i − l − 1 }   (23)
This exponent will permit one to find the dominant contribution to the bit-error probability for N → ∞.
1.2.2 The Maximum Exponent of N
We need to treat the cases of nonrecursive and recursive inner encoders separately. As we will see, nonrecursive encoders and block encoders show the same behavior.
1.2.2.1 Block and Nonrecursive Convolutional Inner Encoders
Consider the inner code and its impact on the exponent of N in Equation (23). For a nonrecursive inner encoder, we have n^i_M = l. In fact, every input sequence with weight 1 generates a finite-weight error event, so that an input sequence with weight l will generate, at most, l error events corresponding to the concatenation of l error events of input weight 1. Since the uniform interleaver generates all possible permutations of its input sequences, this event will certainly occur. Thus, from Equation (23) we have

α_M = n^o_M − 1 ≥ 0

and interleaving gain is not allowed. This conclusion holds true both for SMCCCs employing a nonrecursive inner encoder and for all SMCBCs, since block codes have codewords corresponding to input words with weight equal to 1. For those SMCCCs, we always have, for some h, coefficients of the exponential in h of Expression (16) that increase with N, and this explains the divergence of the bound arising, for each E_b/N_0, when the coefficients increasing with N become dominant.
1.2.2.2 Recursive Inner Encoders
For recursive convolutional encoders, the minimum weight of input sequences generating error events is 2. As a consequence, an input sequence of weight l can generate at most ⌊l/2⌋ error events.
Assuming that the inner encoder of the SMCCC is recursive, the maximum exponent of N in Equation (23) becomes

α_M = max_{w,l} { n^o + ⌊l/2⌋ − l − 1 }   (24)

The maximization involves l and w, since n^o depends on both quantities. In fact, remembering the definition of n^o_M as the maximum number of concatenated error events of codewords of the outer code with weight l generated by input words of weight w, it is straightforward, as in Equation (19), to obtain

n^o_M ≤ ⌊ l / d_f^o ⌋   (25)

Substituting now the last inequality, Equation (25), into Equation (24) yields

α_M ≤ max_l { ⌊ l / d_f^o ⌋ + ⌊ l/2 ⌋ − l − 1 }   (26)
To perform the maximization of the right-hand side (RHS) of Expression (26), consider first the case of l = q d_f^o, where q is an integer, so that

⌊ l / d_f^o ⌋ + ⌊ l/2 ⌋ − l − 1 = q + ⌊ q d_f^o / 2 ⌋ − q d_f^o − 1   (27)

The RHS of Expression (27) is maximized, for d_f^o ≥ 2, by choosing q = 1. On the other hand, for values of l that are not multiples of d_f^o, the most favorable case is again l = q d_f^o, which leads us back to the previously discussed situation. Thus, the maximization requires l = d_f^o. For this value, on the other hand, we have, from Equation (25), n^o_M ≤ 1, and the inequality becomes an equality if w ∈ W_f^o, where W_f^o is the set of input weights w that generate codewords of the outer code with weight l = d_f^o. In conclusion, the largest exponent of N is given by

α_M = − ⌊ (d_f^o + 1) / 2 ⌋   (28)
The value of α_M in Equation (28) shows that the exponents of N in Expression (16) are always negative integers. Thus, for all h, the coefficients of the exponentials in h decrease with N, and we always have an interleaver gain. Denoting by d_{f,eff}^i the minimum weight of codewords of the inner code generated by weight-2 input sequences, we obtain a different weight h(α_M) for even and odd values of d_f^o. For even d_f^o, the weight h(α_M) associated to the highest exponent of N is given by

h(α_M) = (d_f^o / 2) · d_{f,eff}^i

since it is the weight of an inner codeword that concatenates d_f^o/2 error events of weight d_{f,eff}^i. Substituting the exponent α_M into Expression (16), approximated by keeping only the term of the summation in h corresponding to h = h(α_M), yields

P_b(e) ≲ B_even N^{−d_f^o/2} e^{−h(α_M) R_c E_b/N_0}   (29a)

where the constant B_even, defined in Equation (29b), is independent of N and of the signal-to-noise ratio.
In Equation (29b), w_M is the maximum input weight yielding outer codewords with weight equal to d_f^o, and N_f^o is the number of such codewords.
For d_f^o odd, the value of h(α_M) is given by

h(α_M) = ((d_f^o − 3) / 2) · d_{f,eff}^i + h_m^{(3)}   (30)

where h_m^{(3)} is the minimum weight of sequences of the inner code generated by a weight-3 input sequence. In this case, in fact, we have n^i_M = (d_f^o − 1)/2 concatenated error events, of which n^i_M − 1 are generated by weight-2 input sequences and one is generated by a weight-3 input sequence.
Thus, substituting the exponent α_M into Expression (16), approximated by keeping only the term of the summation in h corresponding to h = h(α_M), yields

P_b(e) ≲ B_odd N^{−(d_f^o+1)/2} e^{−h(α_M) R_c E_b/N_0}   (31)

where B_odd is a constant, independent of N and of the signal-to-noise ratio, collecting the multiplicities of the outer and inner codewords involved.
In cases of d_f^o both even and odd, we can draw from Expressions (29) and (31) a few important design considerations, as follows.
(1) In contrast with the case of block codes and nonrecursive convolutional inner encoders, the use of a recursive convolutional inner encoder always yields an interleaver gain. As a consequence, the first design rule states that the inner encoder must be a convolutional recursive encoder.
(2) The coefficient h(α_M) that multiplies the signal-to-noise ratio E_b/N_0 in Expression (16) increases for increasing values of d_{f,eff}^i. Thus, we deduce that the effective free distance of the inner code must be maximized. Both this and the previous design rule had also been stated for PMCCCs. As a consequence, the recursive convolutional encoders optimized for use in PMCCCs can be employed altogether as inner CCs in SMCCCs. When d_f^o is odd, for special cases it is possible to increase h(α_M) and h_m further by choosing the feedback polynomial of the inner code to have a factor (1 + D), yielding h_m^{(3)} = ∞. Note that there are other feedback polynomials, such as (1 + D + D^2 + D^3 + D^4) or (1 + D + D^2 + D^3 + D^4 + D^5 + D^6), yielding h_m^{(3)} = ∞.
(3) The interleaver gain is equal to N^{−d_f^o/2} for even values of d_f^o and to N^{−(d_f^o+1)/2} for odd values of d_f^o. As a consequence, we should choose, compatibly with the desired rate R_c of the SMCCC, an outer code with a large and, possibly, odd value of the free distance.
(4) As to other outer code parameters, N_f^o and w_M should be minimized. In other words, we should have the minimum number of input sequences generating free-distance error events of the outer code, and their input weights should be minimized. Since nonrecursive encoders have error events with w = 1 and, in general, fewer input errors associated with error events at free distance, it can be convenient to choose as an outer code a nonrecursive encoder with minimum N_f^o and w_M.
1.2.2.3 Examples Confirming the Design Rules
To confirm the design rules obtained asymptotically (i.e., for large signal-to-noise ratios and large interleaver lengths N), we evaluate the upper bound, Expression (16), to the bit-error probability for several block and convolutional SMCCs with different interleaver lengths, and compare their performances with those predicted by the design guidelines.
1.2.2.3.1 Serially Multiple Concatenated Block Codes
We consider three different SMCBCs obtained as follows: a) the first is the (7m, 3m, N) SMCBC; b) the second is a (15m, 4m, N) SMCBC using as outer code a (5, 4) parity-check code and as inner code a (15, 5) Bose-Chaudhuri-Hocquenghem (BCH) code; c) the third is a (15m, 4m, N) SMCBC using as outer code a (7, 4) Hamming code and as inner code a (15, 7) BCH code.
Note that the second and third SMCBCs have the same rate, 4/15. The outer, inner, and SMCBC code parameters introduced in the design analysis are listed in Table 2.
In Figures 18, 19 and 20, we plot the bit-error probability bounds for SMCBCs 1, 2 and 3 of Table 2. Code SMCBC1 has d_f^o = 2; thus, from Equation (21), we expect an interleaver gain going as N^{−1}. This is confirmed by the curves of Figure 18, which, for a fixed and sufficiently large signal-to-noise ratio, show a decrease in P_b(e) of a factor of 10 when N passes from 4 to 40, from 40 to 400, and from 400 to 4000. Moreover, from Expression (22), we expect, in each curve for ln P_b(e), a slope with E_b/N_0 of −h_m R_c. From Table 2, we know that R_c = 3/7 and h_m = 3, so that P_b(e) should decrease by a factor of e^{h_m R_c} = 3.6 when the signal-to-noise ratio increases by 1 (not in dB). This behavior fully agrees with the curves of Figure 18. Finally, the curves of Figure 18 show a divergence of the bound at lower E_b/N_0 for increasing N. This is due to coefficients of terms with h > h_m in Expression (16) that increase with N and whose influence becomes more important for larger N.
Table 2: Design parameters of CCs and SMCBCs for three serially concatenated block codes
[table: outer code, inner code, and SMCBC parameters for SMCBC1, SMCBC2 and SMCBC3]
Code SMCBC2 has d_f^o = 2; thus, from Equation (21), we expect the same interleaver gain as for SMCBC1, i.e., N^{−1}.
This is confirmed by the curves of Figure 19. This code, however, has a larger minimum distance h_m = 7, and a rate R_c = 4/15. Thus, we expect a steeper descent of P_b(e) with E_b/N_0. More precisely, we expect a decrease by a factor of e^{h_m R_c} = 6.5 when the signal-to-noise ratio increases by 1. This, too, is confirmed by the curves, which also show the bound divergence predicted in our analysis.
Code SMCBC3 has d_f^o = 3; thus, from Equation (21), we expect a larger interleaver gain than for SMCBC1 and SMCBC2, i.e., N^{−2}. This is confirmed by the curves of Figure 20, which, for a fixed and sufficiently large signal-to-noise ratio, show a decrease in P_b(e) of a factor of 100 when N passes from 7 to 70, from 70 to 700, and from 700 to 7000. This code has a minimum distance h_m = 5 and a rate R_c = 4/15, which means a descent of P_b(e) with E_b/N_0 by a factor of e^{h_m R_c} = 3.8 when the signal-to-noise ratio increases by 1. This, too, is confirmed by the curves. As to the bound divergence, we notice a slightly different behavior with respect to the previous cases: the curve with N = 7000, in fact, denotes a strong influence of coefficients increasing with N for E_b/N_0 lower than 7.
1.2.2.3.2 Serially Multiple Concatenated Convolutional Codes
We consider four different SMCCCs obtained as follows. The first, SMCCC1, is a (3,1,N) SMCCC, using as outer code a four-state (2,1) recursive, systematic convolutional encoder and as inner code a four-state (3,2) recursive, systematic convolutional encoder. The second, SMCCC2, is a (3,1,N) SMCCC, using as outer code the same four-state (2,1) recursive, systematic convolutional encoder as SMCCC1, and as inner code a four-state (3,2) nonrecursive convolutional encoder. The third, SMCCC3, is a (3,1,N) SMCCC, using as outer code a four-state (2,1) nonrecursive convolutional encoder, and as inner code the same four-state (3,2) recursive, systematic convolutional encoder as SMCCC1. Finally, the fourth, SMCCC4, is a (6,2,N) SMCCC using as outer code a four-state (3,2) nonrecursive convolutional encoder, and as inner code a four-state (6,3) recursive, systematic convolutional encoder obtained by using three times the four-state (2,1) recursive, systematic convolutional encoder of Table 2. The outer, inner, and SMCCC code parameters introduced in the design analysis are listed in Table 3. In this table, the CCs are identified through the descriptions of Table 2. In Figures 21, 22, 23, and 24, we plot the bit-error probability bounds for SMCCCs 1, 2, 3, and 4 of Table 3, with input information block lengths R_c^o N = 100, 200, 300, 400, 500, and 1000.
Consider first the SMCCCs employing recursive convolutional encoders as inner CCs. They are SMCCC1, SMCCC3, and SMCCC4. Code SMCCC1 has d_f^o = 5; thus, from Expression (31), we expect an interleaver gain behaving as N^{−3}. This is fully confirmed by the curves of Figure 21, which, for a fixed and sufficiently large signal-to-noise ratio, show a decrease in P_b(e) of a factor of 1000 when N passes from 200 to 2000. For an even more accurate confirmation, one can compare the interleaver gain for every pair of curves in Figure 21. Moreover, from Expression (31), we expect in each curve for ln P_b(e) a slope with E_b/N_0 of −h(α_M) R_c. From Table 3, we know that R_c = 1/3 and h(α_M) = 7, so that P_b(e) should decrease by a factor of e^{h(α_M) R_c} = 10.3 when the signal-to-noise ratio increases by 1. This behavior fully agrees with the curves of Figure 21. Finally, the curves of Figure 21 do not show a divergence of the bound at lower E_b/N_0 for increasing N. This is due to the choice of a recursive encoder for the inner code, which guarantees that all coefficients N^{α(h)} decrease with N.
Table 3: Design parameters of MCCs and SMCCCs for four SMCCCs
[table: outer code, inner code, and SMCCC parameters for SMCCC1 through SMCCC4]
Code SMCCC3 differs from SMCCC1 only in the choice of a nonrecursive outer encoder, which is a four-state encoder (see Tables 2 and 3) with the same d_f^o as for SMCCC1, but with w_m^o = 1 instead of w_m^o = 2.
From the design conclusions, we expect a slightly better behavior from this SMCCC. This is confirmed by the performance curves of Figure 23, which present the same interleaver gain and slope as those of SMCCC1 but have a slightly lower P_b(e) (the curves for SMCCC3 are translated versions of those of SMCCC1 by 0.1 dB).
Code SMCCC4 employs the same CCs as SMCCC2 but reverses their order. It uses as outer code a rate 2/3 nonrecursive convolutional encoder and as inner code a rate 1/2 recursive convolutional encoder. As a consequence, it has a lower d_f^o = 3 and a higher α_M = −2. Thus, from Expression (31), we expect a lower interleaver gain than for SMCCC1 and SMCCC3, namely N^{−2}. This is confirmed by the curves of Figure 24, which, for a fixed and sufficiently large signal-to-noise ratio, show a decrease in P_b(e) of a factor of 100 when N passes from 150 to 1500. As to the slope with E_b/N_0, this code has the same −h(α_M) R_c as SMCCC1 and SMCCC3 and, thus, the same slope. On the whole, SMCCC4 loses more than 2 dB in coding gain with respect to SMCCC3. This result confirms the design rule suggesting the choice of an outer code with d_f^o as large as possible.
Finally, let us consider code SMCCC2, which differs from SMCCC1 in the choice of a nonrecursive inner encoder, with the same parameters but with the crucial difference of w_m^i = 1. Its bit-error probability curves are shown in Figure 22. We see, in fact, that for low signal-to-noise ratios, say below 3, no interleaver gain is obtained. This is because the performance is dominated by the exponent h(α_M), whose coefficient increases with N. On the other hand, for larger signal-to-noise ratios, where the dominant contribution to P_b(e) is the exponent with the lowest value h_m, the interleaver gain makes its appearance. From Expression (22), we foresee a gain of N^{−4}, meaning four orders of magnitude for N passing from 100 to 1000. Curves in Figure 22 show a smaller gain (slightly higher than 1/1000), which is, on the other hand, rapidly increasing.
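As a quick numerical check of the figures quoted in this subsection, the following Python sketch recomputes the slope factors e^{h R_c} and the interleaver-gain exponent of Equation (28) for the example codes, using only the parameter values stated in the text:

```python
import math

# Slope factors: Pb(e) shrinks by exp(h * Rc) when the linear Eb/N0 grows by 1.
slopes = {"SMCBC1": (3, 3 / 7), "SMCBC2": (7, 4 / 15), "SMCBC3": (5, 4 / 15),
          "SMCCC1": (7, 1 / 3)}                        # (h, Rc) as stated in the text
for name, (h, Rc) in slopes.items():
    print(name, round(math.exp(h * Rc), 1))            # -> 3.6, 6.5, 3.8, 10.3

def alpha_M(dfo):
    """Equation (28): largest exponent of N for a recursive inner encoder."""
    return -((dfo + 1) // 2)

# Interleaver gains predicted for the SMCCC examples with recursive inner codes.
for name, dfo, n_lo, n_hi in [("SMCCC1/3", 5, 200, 2000), ("SMCCC4", 3, 150, 1500)]:
    gain = (n_hi / n_lo) ** alpha_M(dfo)                # fraction of Pb(e) remaining
    print(name, alpha_M(dfo), gain)                     # -> -3, 1e-3 and -2, 1e-2
```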
1.3 Parallel Multiple Concatenated Convolutional Codes
The concept of the Parallel Multiple Concatenated Convolutional Code (PMCCC) relies on soft-output decoding and iterative decoding. We present two versions of a simplified maximum a posteriori (MAP) decoding algorithm. The algorithms work in a sliding window form (like the Viterbi algorithm) and can thus be used to decode continuously transmitted sequences obtained by PMCCC, without requiring code trellis termination. A heuristic explanation is also given of how to embed the maximum a posteriori algorithms into the iterative decoding of PMCCC. The performances of the two algorithms are compared on the basis of a powerful rate 1/3 PMCCC. Basic circuits to implement the simplified a posteriori decoding algorithm using lookup tables, and two further approximations (linear and threshold) that eliminate the need for lookup tables with a very small penalty, are proposed.
The broad framework of this analysis encompasses digital transmission systems where the received signal is a sequence of waveforms whose correlation extends well beyond T, the signaling period. There can be many reasons for this correlation, such as coding, intersymbol interference (ISI), or correlated fading. The optimum receiver in such situations cannot perform its decisions on a symbol-by-symbol basis, so that deciding on a particular information symbol u_k involves processing a portion of the received signal T_j seconds long, with T_j > T. The decision rule can be either optimum with respect to a sequence of symbols, u_k^{k+n−1} = (u_k, u_{k+1}, ..., u_{k+n−1}), or with respect to the individual symbol u_k.
The most widely applied algorithm for the first kind of decision rule is the Viterbi algorithm. In its optimum formulation, it would require waiting for decisions until the whole sequence has been received. In practical implementations, this drawback is overcome by anticipating decisions (single or in batches) on a regular basis with a fixed delay, D. Choice of D as five to six times the memory of the received data is widely recognized as a good compromise between performance, complexity, and decision delay. Optimum symbol decision algorithms must base their decisions on the maximum a posteriori (MAP) probability. They have been known since the early seventies, although they are much less popular than the Viterbi algorithm and almost never applied in practical systems. There is a very good reason for this neglect: they yield performance in terms of symbol error probability only slightly superior to the Viterbi algorithm, yet they present a much higher conceptual complexity. Only recently has interest in these algorithms revived in connection with the problem of decoding concatenated coding schemes. Concatenated coding schemes (a class in which we include product codes, multilevel codes, generalized concatenated codes, and serial and parallel concatenated codes) were proposed as a means of achieving large coding gains by combining two or more relatively simple "constituent" codes. The resulting concatenated coding scheme is a powerful code endowed with a structure that permits an easy decoding, like "stage decoding" or "iterated stage decoding". To work properly, all these decoding algorithms cannot limit themselves to passing the symbols decoded by the inner decoder to the outer decoder; they need to exchange some kind of soft information. The optimum output of the inner decoder should be in the form of the sequence of probability distributions over the inner code alphabet conditioned on the received signal, the a posteriori probability (APP) distribution. There have been several attempts to achieve, or at least to approach, this goal. Some of them are based on modifications of the Viterbi algorithm so as to obtain, at the decoder output, in addition to the "hard"-decoded symbols, some reliability information. This has led to the concept of "augmented-output", or the list-decoding Viterbi algorithm, and to the soft-output Viterbi algorithm (SOVA). These solutions are clearly sub-optimal, as they are unable to supply the required APP. A different approach consisted in revisiting the original symbol MAP decoding algorithms with the aim of simplifying them to a form suitable for implementation. Figure 25 shows a PMCCC whose encoder is formed by two (or more) constituent systematic encoders joined through an interleaver. The input information bits feed the first encoder and, after having been interleaved by the interleaver, enter the second encoder. The codeword of the PMCCC comprises the input bits to the first encoder followed by the parity check bits of both encoders. Generalizations to more than one interleaver are possible and fruitful.
The sub-optimal iterative decoder is modular and comprises a number of equal component blocks formed by concatenating soft decoders of the constituent codes (CC) separated by the interleavers used at the encoder side. By increasing the number of decoding modules and, thus, the number of decoding iterations, bit-error probabilities as low as 10^-5 at E_b/N_0 = 0.0 dB for rate 1/4 PMCCC have been shown by simulation.
We will describe two versions of a simplified MAP decoding algorithm that can be used as building blocks of the iterative decoder to decode PMCCCs. A distinctive feature of the algorithms is that they work in a "sliding window" form, like the Viterbi algorithm, and thus can be used to decode "continuously transmitted" PMCCCs, without requiring trellis termination and a block-equivalent structure of the code.
The final aim is to find suitable soft-output decoding algorithms for iterated staged decoding of PMCCCs employed in a continuous transmission.
We will refer to the transmission system of Figure 26. The information sequence u, composed of symbols drawn from an alphabet U = {u_1, ..., u_M} and emitted by the source, enters an encoder that generates code sequences c. Both source and code sequences are defined over a time index set K (a finite or infinite set of integers). Denoting the code alphabet by C = {c_1, ..., c_M}, the code C can be written as a subset of the Cartesian product of C by itself K times, i.e., C ⊆ C^K. The code symbols c_k (the index k will refer to time) enter the modulator, which performs a one-to-one mapping of them with its signals, or channel input symbols x_k, belonging to the set X = {x_1, ..., x_M}.
The channel symbols x_k are transmitted over a stationary memoryless channel with output symbols y_k. The channel is characterized by the transition probability distribution (discrete or continuous, according to the channel model) P(y|x). The channel output sequence is fed to the symbol-by-symbol soft-output demodulator, which produces a sequence of probability distributions γ_k(c) over C conditioned on the received signal, according to the memoryless transformation

γ_k(c) = P(x_k = x(c), y_k) = P(y_k | x_k = x(c)) P_k(c) = γ_k(x)   (33)

where we have assumed knowledge of the sequence of the a priori probability distributions of the channel input symbols (P_k(x), k ∈ K) and made use of the one-to-one mapping C → X.
The sequence of probability distributions γ_k(c), obtained by the demodulator on a symbol-by-symbol basis, is then supplied to the soft-output symbol decoder, which processes the distributions in order to obtain the probability distributions P_k(u|y). They are defined as

P_k(u|y) = P(u_k = u | y)   (34)

The probability distributions P_k(u|y) are referred to in the literature as symbol-by-symbol a posteriori probabilities (APP) and represent the optimum symbol-by-symbol soft output. From here on, we will limit ourselves to the case of time-invariant convolutional codes with N states, use the following notations with reference to Figure 27, and assume that the (integer) time instant we are interested in is the k-th:
(1) S_i is the generic state at time k, belonging to the set S = {S_1, ..., S_N}.
(2) S_i^-(u') is one of the precursors of S_i, and precisely the one defined by the information symbol u' emitted during the transition S_i^-(u') → S_i.
(3) S_i^+(u) is one of the successors of S_i, and precisely the one defined by the information symbol u emitted during the transition S_i → S_i^+(u).
(4) To each transition in the trellis, a signal x is associated, which depends on the state from which the transition originates and on the information symbol u determining that transition. When necessary, we will make this dependence explicit by writing x(u', S_i) when the transition ends in S_i, and x(S_i, u) when the transition originates from S_i.
1.3.1 The BCJR Algorithm
The BCJR is the optimum algorithm to produce the sequence of APP. We consider first the original version of the algorithm, which applies to the case of a finite index set K = {1, ..., n} and requires knowledge of the whole received sequence y = (y_1, ..., y_n) to work. In the following, the notations u, c, x, and y will refer to sequences n symbols long, and the integer time variable k will assume the values 1, ..., n. As for the previous assumption, the encoder admits a trellis representation with N states, so that the code sequences c (and the corresponding transmitted signal sequences x) can be represented as paths in the trellis and uniquely associated with a state sequence s = (s_0, ..., s_n) whose first and last states, s_0 and s_n, are assumed to be known by the decoder.
Defining the a posteriori transition probabilities from state S_i at time k as

σ_k(S_i, u) = P(u_k = u, s_{k−1} = S_i | y)   (35)

the APP P_k(u|y) we want to compute can be obtained as

P_k(u|y) = Σ_{S_i} σ_k(S_i, u)   (36)

Thus, the problem of evaluating the APP is equivalent to that of obtaining the a posteriori transition probabilities defined in Equation (35). They can be computed as

σ_k(S_i, u) = h_σ α_{k−1}(S_i) γ_k(x(S_i, u)) β_k(S_i^+(u))   (37)

where:
• h_σ is a normalization constant such that Σ_{S_i,u} σ_k(S_i, u) = 1;
• γ_k(x(S_i, u)) are the joint probabilities already defined in Equation (33), i.e.,

γ_k(x) = P(y_k, x_k = x) = P(y_k | x_k = x) P(x_k = x)   (38)

The γ's can be calculated from the knowledge of the a priori probabilities of the channel input symbols x and of the transition probabilities of the channel P(y_k | x_k = x). For each time k, there are M different values of γ to be computed, which are then associated to the trellis transitions to form a sort of branch metric. This information is provided by the symbol-by-symbol soft-output demodulator;
• α_k(S_i) are the probabilities of the states of the trellis at time k conditioned on the past received signals, namely,

α_k(S_i) = P(s_k = S_i | y_1^k)   (39)

where y_1^k denotes the sequence y_1, ..., y_k. They can be obtained by the forward recursion

α_k(S_i) = h_α Σ_u α_{k−1}(S_i^-(u)) γ_k(x(u, S_i))   (40)

with h_α a constant determined through the constraint

Σ_{S_i} α_k(S_i) = 1

and where the recursion is initialized as

α_0(S_i) = 1 if S_i = s_0, and α_0(S_i) = 0 otherwise   (41)
• β_k(S_i) are the probabilities of the trellis states at time k conditioned on the future received signals, P(s_k = S_i | y_{k+1}^n).
They can be obtained by the backward recursion

β_k(S_i) = h_β Σ_u β_{k+1}(S_i^+(u)) γ_{k+1}(x(S_i, u))   (42)

with h_β a constant obtainable through the constraint

Σ_{S_i} β_k(S_i) = 1

and where the recursion is initialized as

β_n(S_i) = 1 if S_i = s_n, and β_n(S_i) = 0 otherwise   (43)
We can now formulate the BCJR algorithm by the following steps:
(1) Initialize α_0 and β_n according to Equations (41) and (43).
(2) As soon as each term y_k of the sequence y is received, the demodulator supplies to the decoder the "branch metrics" γ_k of Equation (38), and the decoder computes the probabilities α_k according to Equation (40). The obtained values of α_k(S_i), as well as the γ_k, are stored for all k, S_i, and x.
(3) When the entire sequence y has been received, the decoder recursively computes the probabilities β_k according to the recursion of Equation (42) and uses them, together with the stored α's and γ's, to compute the a posteriori transition probabilities σ_k(S_i, u) according to Equation (37) and, finally, the APP P_k(u|y) from Equation (36).
1.3.2 The Sliding Window BCJR (SW-BCJR)
The BCJR algorithm requires that the whole sequence has been received before starting the decoding process. In this aspect, it is similar to the Viterbi algorithm in its optimum version. To apply it in a PMCCC, we would need to subdivide the information sequence into blocks, decode them by terminating the trellises of both CCs, and then decode the received sequence block by block. Beyond this rigidity, this solution also reduces the overall code rate. A more flexible decoding strategy is offered by a modification of the BCJR algorithm in which the decoder operates on a fixed memory span, and decisions are forced with a given delay D. We call this new, sub-optimal algorithm the sliding window BCJR (SW-BCJR) algorithm. We will describe two versions of the sliding window BCJR algorithm that differ in the way they overcome the problem of initializing the backward recursion without having to wait for the entire sequence. We will describe the two algorithms using the previous step description, suitably modified. Of the previous assumptions, we retain only that of the knowledge of the initial state s_0, and thus assume the transmission of semi-infinite code sequences, where the time span K ranges from 1 to ∞.
1.3.2.1 The First Version of the Sliding Window BCJR Algorithm (SW1-BCJR)
Here are the steps:
(1) Initialize α_0 according to Equation (41).
(2) Forward recursion at time k: upon receiving y_k, the demodulator supplies to the decoder the M distinct branch metrics, and the decoder computes the probabilities α_k(S_i) according to Equations (38) and (40). The obtained values of α_k(S_i) are stored for all S_i, as well as the γ_k(x).
(3) Initialization of the backward recursion (k > D):

β_k(S_j) = α_k(S_j)   ∀ S_j   (44)

(4) Backward recursion: it is performed according to Equation (42) from time k − 1 back to time k − D.
(5) The a posteriori transition probabilities at time k − D are computed according to

σ_{k−D}(S_i, u) = h_σ α_{k−D−1}(S_i) γ_{k−D}(x(S_i, u)) β_{k−D}(S_i^+(u))   (45)

(6) The APP at time k − D is computed as

P_{k−D}(u|y) = Σ_{S_i} σ_{k−D}(S_i, u)   (46)
For a convolutional code with parameters (k_0, n_0), number of states N, and cardinality of the code alphabet M = 2^{n_0}, the SW1-BCJR algorithm requires storage of N×D values of α's and M×D values of the probabilities γ_k(x) generated by the soft demodulator. Moreover, to update the α's and β's for each time instant, the algorithm needs to perform M×2^{k_0} multiplications and N additions of 2^{k_0} numbers. To output the set of APP at each time instant, we need a backward recursion D steps long. Thus, the computational complexity requires overall (D+1)·M×2^{k_0} multiplications and (D+1)·M additions of 2^{k_0} numbers each.
As a comparison, the Viterbi algorithm would require, in the same situation, M×2^{k_0} additions and M 2^{k_0}-way comparisons, plus the trace-back operations, to get the decoded bits.
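For concreteness, the operation counts just given can be tabulated for sample parameters; the sketch below simply evaluates the formulas stated above (the chosen k_0, n_0, N, D values are arbitrary examples).

```python
def sw1_bcjr_cost(k0, n0, n_states, D):
    """Per-output-symbol cost of the SW1-BCJR, as stated in the text."""
    M = 2 ** n0                                   # code alphabet cardinality
    mults = (D + 1) * M * 2 ** k0                 # multiplications
    adds = (D + 1) * M                            # additions of 2**k0 numbers each
    storage = n_states * D + M * D                # stored alphas + stored gammas
    return mults, adds, storage

# Example: a 16-state rate-1/2 code (k0=1, n0=2) with decision delay D = 30.
print(sw1_bcjr_cost(k0=1, n0=2, n_states=16, D=30))
```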
1.3.2.2 The Second, Simplified Version of the Sliding Window BCJR Algorithm (SW2-BCJR)
A simplification of the sliding window BCJR that significantly reduces the memory requirements comprises the following steps:
(1) Initialize α_0 according to Equation (41).
(2) Forward recursion (k > D): if k > D, the probabilities α_{k−D}(S_i) are computed according to Equation (40).
(3) Initialization of the backward recursion (k > D):

β_k(S_j) = 1/N   ∀ S_j   (47)

(4) Backward recursion (k > D): it is performed according to Equation (42) from time k − 1 back to time k − D.
(5) The a posteriori transition probabilities at time k − D are computed according to

σ_{k−D}(S_i, u) = h_σ α_{k−D−1}(S_i) γ_{k−D}(x(S_i, u)) β_{k−D}(S_i^+(u))   (48)

(6) The APP at time k − D is computed as

P_{k−D}(u|y) = Σ_{S_i} σ_{k−D}(S_i, u)   (49)
This version of the sliding window BCJR algorithm does not require storage of the N×D values of α's, as they are updated with a delay of D steps. As a consequence, only N values of α's and M×D values of the probabilities γ_k(x) generated by the soft demodulator must be stored. The computational complexity is the same as for the previous version of the algorithm.
However, since the initialization of the β recursion is less accurate, a larger value of D should be set in order to obtain the same accuracy on the output values P_{k−D}(u|y). This observation will receive quantitative evidence in the section devoted to simulation results.
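A sketch of the SW2-BCJR scheduling (steps (1)-(6) above), written to emphasize the reduced storage: only the current α vector and a D-deep buffer of branch metrics are kept. The branch metrics are passed in as a precomputed array, and the two-state transition table and random test metrics are illustrative assumptions.

```python
import numpy as np
from collections import deque

# trans[s][u] = next state; toy 2-state trellis, for illustration only.
trans = np.array([[0, 1], [1, 0]])
N_STATES, M_IN = 2, 2

def sw2_bcjr(gammas, D):
    """gammas[k, s, u]: branch metric at time k; yields APPs with delay ~D."""
    alpha = np.zeros(N_STATES); alpha[0] = 1.0                # Eq. (41)
    buf = deque(maxlen=D)                                     # only D gammas stored
    out = []
    for k in range(gammas.shape[0]):
        buf.append(gammas[k])
        if len(buf) == D:
            beta = np.full(N_STATES, 1.0 / N_STATES)          # Eq. (47)
            for g in reversed(list(buf)[1:]):                 # backward, Eq. (42)
                new_beta = np.zeros(N_STATES)
                for s in range(N_STATES):
                    for u in range(M_IN):
                        new_beta[s] += beta[trans[s, u]] * g[s, u]
                beta = new_beta / new_beta.sum()
            g0 = buf[0]                                       # oldest buffered step
            sigma = np.zeros(M_IN)                            # Eq. (48)
            for s in range(N_STATES):
                for u in range(M_IN):
                    sigma[u] += alpha[s] * g0[s, u] * beta[trans[s, u]]
            out.append(sigma[1] / sigma.sum())                # Eq. (49)
            # advance the single stored alpha vector by one step, Eq. (40)
            new_alpha = np.zeros(N_STATES)
            for s in range(N_STATES):
                for u in range(M_IN):
                    new_alpha[trans[s, u]] += alpha[s] * g0[s, u]
            alpha = new_alpha / new_alpha.sum()
    return out

rng = np.random.default_rng(0)
print(sw2_bcjr(rng.random((12, N_STATES, M_IN)), D=4)[:3])
```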
1.3.3 Additive Algorithms
1.3.3.1 The Log-BCJR
The BCJR algorithm and its sliding window versions have been stated in multiplicative form. Owing to the monotonicity of the logarithm function, they can be converted into an additive form by taking logarithms. Let us define the following logarithmic quantities:

Γ_k(x) = log[γ_k(x)]
A_k(S_i) = log[α_k(S_i)]
B_k(S_i) = log[β_k(S_i)]
Σ_k(S_i, u) = log[σ_k(S_i, u)]
These definitions lead to the following A and B recursions, derived from Equations (40), (42), and (37):

A_k(S_i) = log { Σ_u exp[ A_{k−1}(S_i^-(u)) + Γ_k(x(u, S_i)) ] } + H_A   (50)

B_k(S_i) = log { Σ_u exp[ B_{k+1}(S_i^+(u)) + Γ_{k+1}(x(S_i, u)) ] } + H_B   (51)

Σ_k(S_i, u) = A_{k−1}(S_i) + Γ_k(x(S_i, u)) + B_k(S_i^+(u)) + H_Σ   (52)

with the following initializations:

A_0(S_i) = 0 if S_i = s_0, and A_0(S_i) = −∞ otherwise

B_n(S_i) = 0 if S_i = s_n, and B_n(S_i) = −∞ otherwise
1.3.3.2 Simplified Versions of the Log-BCJR
The problem in the recursions defined for the log-BCJR is the evaluation of the logarithm of a sum of exponentials,

log Σ_i e^{A_i}

An accurate estimate of this expression can be obtained by extracting the term with the highest exponent,

A_M = max_i { A_i }

so that

log Σ_i e^{A_i} = A_M + log[ 1 + Σ_{i≠M} e^{(A_i − A_M)} ]   (53)

and by computing the second term of the right-hand side (RHS) of Equation (53) using lookup tables.
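The correction term in Equation (53) is what the lookup table would store. The sketch below compares the exact log-sum-exp written in the form of Equation (53), the plain max approximation that leads to the AL-BCJR, and a pairwise "max plus correction" operator; the test values are arbitrary.

```python
import math

def logsumexp_exact(values):
    """log(sum(exp(A_i))), written as in Eq. (53): max term plus a correction."""
    a_max = max(values)
    correction = math.log(sum(math.exp(a - a_max) for a in values))  # = log(1 + ...)
    return a_max + correction

def logsumexp_maxlog(values):
    """AL-BCJR / max-log approximation: drop the correction term entirely."""
    return max(values)

def max_star(a, b):
    """Pairwise form with the lookup-table correction log(1 + e^-|a-b|)."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

vals = [-1.3, 0.4, 0.1]
print(logsumexp_exact(vals),
      logsumexp_maxlog(vals),
      max_star(max_star(vals[0], vals[1]), vals[2]))
```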
However, when A_M >> A_i for all i ≠ M, the second term can be neglected. This approximation leads to the additive logarithmic BCJR (AL-BCJR) algorithm:

A_k(S_i) = max_u { A_{k−1}(S_i^-(u)) + Γ_k(x(u, S_i)) } + H_A   (54)

B_k(S_i) = max_u { B_{k+1}(S_i^+(u)) + Γ_{k+1}(x(S_i, u)) } + H_B   (55)

Σ_k(S_i, u) = A_{k−1}(S_i) + Γ_k(x(S_i, u)) + B_k(S_i^+(u)) + H_Σ   (56)

with the same initialization as the log-BCJR.
Both versions of the SW-BCJR algorithm described can be used, with obvious modifications, to transform the block log-BCJR and the AL-BCJR into their sliding window versions, leading to the SW-log-BCJR and the SWAL1-BCJR and SWAL2-BCJR algorithms.
1.3.4 Explicit Algorithms for Some Particular Cases
In this section, we will make explicit the quantities considered in the previous algorithms' descriptions by making assumptions on the code type, modulation format, and channel.
1.3.4.1 Rate 1/n Binary Systematic Convolutional Encoder
In this section, we particularize the previous equations to the case of a rate 1/n binary systematic encoder associated to n binary pulse amplitude modulation (PAM) signals or binary phase shift keying (PSK) signals.
The output symbols from the encoder, the channel symbols and the received symbols can be represented as vectors of n components:

c_k = [c_{k1}, ..., c_{kn}],   x_k = [x_{k1}, ..., x_{kn}],   y_k = [y_{k1}, ..., y_{kn}]

where the notations have been modified to show the vector nature of the symbols. The joint probabilities γ_k(x), over a memoryless channel, can be split as

γ_k(x) = Π_{m=1}^{n} P(y_{km} | x_{km} = x_m) P(x_{km} = x_m)   (57)
Since in this case the encoded symbols are n-tuples of binary symbols, it is useful to redefine the input probabilities γ in terms of the likelihood ratios

λ_{km} = P(y_{km} | x_{km} = A) / P(y_{km} | x_{km} = −A)

λ^a_{km} = P(x_{km} = A) / P(x_{km} = −A)

so that, from Equation (57),

γ_k(x) = h_γ Π_{m=1}^{n} [ λ_{km} λ^a_{km} ]^{c_m}

where h_γ takes into account all terms independent of x. The BCJR can be restated as follows:

α_k(S_i) = h_γ h_α Σ_u α_{k−1}(S_i^-(u)) Π_{m=1}^{n} [ λ_{km} λ^a_{km} ]^{c_m(u, S_i)}   (58)

β_k(S_i) = h_γ h_β Σ_u β_{k+1}(S_i^+(u)) Π_{m=1}^{n} [ λ_{(k+1)m} λ^a_{(k+1)m} ]^{c_m(S_i, u)}   (59)

σ_k(S_i, u) = h_γ h_σ α_{k−1}(S_i) Π_{m=1}^{n} [ λ_{km} λ^a_{km} ]^{c_m(u, S_i)} β_k(S_i^+(u))   (60)

whereas its simplification, the AL-BCJR algorithm, becomes
A_k(S_i) = max_u { A_{k−1}(S_i^-(u)) + Σ_{m=1}^{n} c_m(u, S_i) (Λ_{km} + Λ^a_{km}) } + H_A   (61)

B_k(S_i) = max_u { B_{k+1}(S_i^+(u)) + Σ_{m=1}^{n} c_m(S_i, u) (Λ_{(k+1)m} + Λ^a_{(k+1)m}) } + H_B   (62)

Σ_k(S_i, u) = A_{k−1}(S_i) + Σ_{m=1}^{n} c_m(S_i, u) (Λ_{km} + Λ^a_{km}) + B_k(S_i^+(u))   (63)

where Λ stands for the logarithm of the corresponding quantity λ.
1.3.4.2 The Additive White Gaussian Noise Channel
When the channel is an additive white Gaussian noise (AWGN) channel, we obtain the explicit expression of the log-likelihood ratios Λ_{km} as

Λ_{km} = (2A/σ²) y_{km}

Hence, the AL-BCJR algorithm assumes the following form:

A_k(S_i) = max_u { A_{k−1}(S_i^-(u)) + Σ_{m=1}^{n} c_m(u, S_i) ( (2A/σ²) y_{km} + Λ^a_{km} ) } + H_A   (64)

B_k(S_i) = max_u { B_{k+1}(S_i^+(u)) + Σ_{m=1}^{n} c_m(S_i, u) ( (2A/σ²) y_{(k+1)m} + Λ^a_{(k+1)m} ) } + H_B   (65)

Σ_k(S_i, u) = A_{k−1}(S_i) + Σ_{m=1}^{n} c_m(S_i, u) ( (2A/σ²) y_{km} + Λ^a_{km} ) + B_k(S_i^+(u))   (66)
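A small sketch of the channel term (2A/σ²)·y_{km} used in Equations (64)-(66), for antipodal signalling over AWGN; the amplitude, noise variance, and sample values are assumptions chosen only for illustration.

```python
import numpy as np

def channel_llrs(y, A=1.0, sigma2=0.5):
    """Per-bit log-likelihood ratios Lambda_km = (2A / sigma^2) * y_km."""
    return (2.0 * A / sigma2) * np.asarray(y, dtype=float)

# Transmit +A for bit 1 and -A for bit 0, add Gaussian noise, compute LLRs.
rng = np.random.default_rng(1)
bits = rng.integers(0, 2, size=8)
y = (2 * bits - 1) * 1.0 + rng.normal(scale=np.sqrt(0.5), size=8)
llr = channel_llrs(y)
print(bits, (llr > 0).astype(int))   # the sign of the LLR is the hard decision
```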
We will consider turbo codes with rate 1/2 component convolutional codes transmitted as binary PAM or binary PSK over an AWGN channel.
1.3.5 Iterative Decoding of Parallel Concatenated Convolutional Codes
In this section, we show how the MAP algorithms previously described can be embedded into the iterative decoding procedure of parallel concatenated codes. We derive the iterative decoding algorithm through suitable approximations performed on maximum-likelihood decoding. The description is based on the fairly general parallel concatenated code shown in Figure 28, which employs three encoders and three interleavers (denoted by π in the figure). Let U_k be the binary random variable taking values in {0,1}, representing the sequence of information bits u = (u_1, ..., u_N). The optimum decision algorithm on the kth bit u_k is based on the conditional log-likelihood ratio L_k:

L_k = log [ P(u_k = 1 | y) / P(u_k = 0 | y) ]
    = log [ Σ_{u: u_k=1} P(y | x(u)) Π_{j≠k} P(u_j) ] / [ Σ_{u: u_k=0} P(y | x(u)) Π_{j≠k} P(u_j) ] + log [ P(u_k = 1) / P(u_k = 0) ]  (67)

where, in Equation (67), the P(u_j) are the a priori probabilities.
If the rate k_0/n_0 constituent code is not equivalent to a punctured rate 1/n'_0 code (in that case the information sent is the information data and one parity bit, and the parity bit sent is different every time), or if turbo trellis-coded modulation is used, we can first use the symbol MAP algorithm as described in the previous sections to compute the log-likelihood ratio of a symbol u = u_1, ..., u_{k_0}, given the observation y, as

λ(u) = log [ P(u | y) / P(0 | y) ]

where 0 corresponds to the all-zero symbol. We then obtain the log-likelihood ratio of the jth bit within the symbol by

L(u_j) = log [ Σ_{u: u_j=1} e^{λ(u)} ] / [ Σ_{u: u_j=0} e^{λ(u)} ]
In this way, the turbo decoder operates on bits rather than symbols when bit interleaving is used. To explain the basic decoding concept, we restrict ourselves to three codes, but the extension to several codes is straightforward. In order to simplify the notation, consider the combination of an interleaver and the constituent encoder connected to it as a block code with input u and outputs x_i, i = 0,1,2,3 (x_0 = u), and the corresponding received sequences y_i, i = 0,1,2,3. The optimum bit-decision metric on each bit is (for data with uniform a priori probabilities)

L_k = log [ Σ_{u: u_k=1} P(y_0|u) P(y_1|u) P(y_2|u) P(y_3|u) ] / [ Σ_{u: u_k=0} P(y_0|u) P(y_1|u) P(y_2|u) P(y_3|u) ]  (68)

but, in practice, we cannot compute Equation (68) for large N because the permutations π_2 and π_3 imply that y_2 and y_3 are no longer simple convolutional encodings of u. Suppose that we evaluate P(y_i|u), i = 0,2,3, in Equation (68) using Bayes' rule and the following approximation:

P(u | y_i) ≈ Π_{k=1}^{n} P̃_i(u_k)  (69)

Note that P(u | y_i) is not separable in general. However, for i = 0, P(u | y_0) is separable; hence, Equation (69) holds with equality. So we need an algorithm that approximates a nonseparable distribution P(u | y_i) ≜ P with a separable distribution Π_{k=1}^{n} P̃_i(u_k) ≜ Q. The best approximation can be obtained using the Kullback cross-entropy minimizer, which minimizes the cross-entropy H(Q,P) = E{log(Q/P)} between the input P and the output Q.
The MAP algorithm approximates a nonseparable distribution with a separable one; however, it is not clear how good it is compared with the Kullback cross-entropy minimizer. Here we use the MAP algorithm for such an approximation. In the iterative decoding, as the reliability of the {u_k} improves, intuitively one expects that the cross-entropy between the input and the output of the MAP algorithm will decrease, so that the approximation will improve. If such an approximation, i.e., Equation (69), can be obtained, we can use it in Equation (68) for i = 2 and i = 3 (by Bayes' rule) to complete the algorithm. Define L̃_ik by
P̃_i(u_k) ≜ e^{u_k L̃_ik} / (1 + e^{L̃_ik})  (70)

where u_k ∈ {0,1}. To obtain {P̃_i} or, equivalently, {L̃_ik}, we use Equations (69) and (70) for i = 0,2,3 (by Bayes' rule) to express Equation (68) as

L_k = f(y_1, L̃_0, L̃_2, L̃_3, k) + L̃_0k + L̃_2k + L̃_3k  (71)

where L̃_0k = 2A y_0k / σ² (for binary modulation) and

f(y_1, L̃_0, L̃_2, L̃_3, k) = log [ Σ_{u: u_k=1} P(y_1|u) Π_{j≠k} e^{u_j (L̃_0j + L̃_2j + L̃_3j)} ] / [ Σ_{u: u_k=0} P(y_1|u) Π_{j≠k} e^{u_j (L̃_0j + L̃_2j + L̃_3j)} ]  (72)

We can use Equations (69) and (70) again, but this time for i = 0,1,3, to express Equation (68) as

L_k = f(y_2, L̃_0, L̃_1, L̃_3, k) + L̃_0k + L̃_1k + L̃_3k  (73)

and similarly,

L_k = f(y_3, L̃_0, L̃_1, L̃_2, k) + L̃_0k + L̃_1k + L̃_2k  (74)

A solution to Equations (71), (73), and (74) is

L̃_1k = f(y_1, L̃_0, L̃_2, L̃_3, k)
L̃_2k = f(y_2, L̃_0, L̃_1, L̃_3, k)  (75)
L̃_3k = f(y_3, L̃_0, L̃_1, L̃_2, k)

for k = 1,2,...,n, provided that a solution to Equation (75) does indeed exist. The final decision is then based on

L_k = L̃_0k + L̃_1k + L̃_2k + L̃_3k  (76)

which is passed through a hard limiter with zero threshold. We attempted to solve the nonlinear equations in Equation (75) for L̃_1, L̃_2, and L̃_3 by using the iterative procedure

L̃_1k^{(m+1)} = α_1^{(m)} f(y_1, L̃_0, L̃_2^{(m)}, L̃_3^{(m)}, k)  (77)

for k = 1,2,...,n, iterating on m. Similar recursions hold for L̃_2k^{(m)} and L̃_3k^{(m)}.
We start the recursion with the initial condition L̃_1^{(0)} = L̃_2^{(0)} = L̃_3^{(0)} = L̃_0. For the computation of f(·), we can use any MAP algorithm as described in the previous sections, with interleavers (direct and inverse) where needed; call this the basic decoder D_i, i = 1,2,3. The L̃_ik, i = 1,2,3, represent the extrinsic information. The signal flow graph for extrinsic information is shown in Figure 29, which is a fully connected graph without self-loops. Parallel, serial, or hybrid implementations can be realized based on the signal flow graph (in this figure y_0 is considered as part of y_1). Based on our equations, each node's output is equal to the internally generated reliability L minus the sum of all inputs to that node. The BCJR MAP algorithm always starts and ends at the all-zero state since we always terminate the trellis. We assumed π_1 = identity; however, any π_1 can be used.
The overall decoder is composed of block decoders D_i connected in parallel, as in Figure 30 (when the switches are in position P), which can be implemented as a pipeline or by feedback. A serial implementation is also shown in Figure 30 (when the switches are in position S). For those applications where the systematic bits are not transmitted, or for parallel concatenated trellis codes with high-level modulation, we should set L̃_0 = 0. Even in the presence of systematic bits, if desired, one can set L̃_0 = 0 and consider y_0 as part of y_1. If the systematic bits are distributed among encoders, we use the same distribution for y_0 among the received observations for the MAP decoders.
At this point, a further approximation for iterative decoding is possible if one term corresponding to a sequence u dominates the other terms in the summations in the numerator and denominator of Equation (72). Then the summations in Equation (72) can be replaced by "maximum" operations with the same indices, i.e., replacing Σ_{u: u_k=i} with max_{u: u_k=i} for i = 0,1. A similar approximation can be used for L̃_2k and L̃_3k in Equation (75). This sub-optimal decoder then corresponds to an iterative decoder that uses AL-BCJR rather than BCJR decoders. As discussed, such approximations have been used by replacing Σ with "max" in the log-BCJR algorithm to obtain the AL-BCJR. Clearly, all versions of SW-BCJR can replace the BCJR (MAP) decoders in Figure 30.
For turbo codes with only two constituent codes, Equation (77) reduces to

L̃_1k^{(m)} = α_1^{(m)} f(y_1, L̃_0, L̃_2^{(m-1)}, k)
L̃_2k^{(m)} = α_2^{(m)} f(y_2, L̃_0, L̃_1^{(m)}, k)

for k = 1,2,...,n and m = 1,2,..., where, for each iteration, α_1^{(m)} and α_2^{(m)} can be optimized (simulated annealing) or set to 1 for simplicity. The decoding configuration for two codes is shown in Figure 31. In this special case, since the paths in Figure 31 are disjoint, the decoder structure can be reduced to a serial mode structure if desired. If we optimize α_1^{(m)} and α_2^{(m)}, this requires estimates of the variances of L̃_1k and L̃_2k for each iteration in the presence of errors.
In the results presented in the next section, we use a PMCCC with only two constituent codes.
1.3.6 Simulation Results
In this section, we present some simulation results obtained by applying the iterative decoding algorithm, which, in turn, uses the optimum BCJR and the sub-optimal, but simpler, SWAL2-BCJR as embedded MAP algorithms. All simulations refer to a rate 1/3 PMCCC with two equal, recursive convolutional constituent codes with 16 states and generator matrix

G(D) = [ 1,  (1 + D + D^3 + D^4) / (1 + D^3 + D^4) ]

and an interleaver of length 16,384, using an S-random permutation with S = 40. Each simulation run examined at least 25,000,000 bits. In Figure 32, we plot the bit-error probabilities as a function of the number of iterations of the decoding procedure using the optimum block BCJR algorithm for various values of the signal-to-noise ratio. It can be seen that the decoding algorithm converges down to a bit error rate (BER) of 10^{-5} at signal-to-noise ratios of 0.2 dB with nine iterations. The same curves are plotted in Figure 33 for the case of the sub-optimum SWAL2-BCJR algorithm. In this case, 0.75 dB of signal-to-noise ratio is required for convergence to the same BER with the same number of iterations. In Figure 34, the bit-error probability versus the signal-to-noise ratio is plotted for a fixed number (5) of iterations of the decoding algorithm and for both the optimum BCJR and SWAL2-BCJR MAP decoding algorithms. It can be seen that the penalty incurred by the sub-optimum algorithm amounts to about 0.5 dB.
Both algorithms were of the block type. The penalty is completely attributable to the approximation of the sum of exponentials. To verify this, we have used a SW2-BCJR and compared its results with the optimum block BCJR, obtaining the same results.
Finally, in Figures 35 and 36, we plot the number of iterations needed to obtain a given bit-error probability versus the bit signal-to-noise ratio for the two algorithms. These curves provide information on the delay incurred to obtain a given reliability as a function of the bit signal-to-noise ratio.
1.3.7 Circuits to Implement the MAP Algorithm for Decoding Rate 1/n Component Codes of a PMCCC
We show the basic circuits required for the implementation of a serial additive MAP algorithm for both the block log-BCJR and the SW-log-BCJR. Extension to a parallel implementation is straightforward. Figure 37 shows the implementation of Equation (50) for the forward recursion using a lookup table for evaluation of log(1 + e^{-x}); subtraction of max_j A_k(S_j) from A_k(S_i) is used for normalization to prevent buffer overflow. The circuit for maximization can be implemented simply by using a comparator and selector with feedback operation. Figure 38 shows the implementation of Equation (51) for the backward recursion, which is similar to Figure 37. A circuit for computation of log(P_k(u|y)) from Equation (36), using Equation (52) for the final computation of bit reliability, is shown in Figure 39. In Figure 39, switch 1 is in position 1 and switch 2 is open at the start of operation. The circuit accepts Σ_k(S_i, u) for i = 1; then switch 1 moves to position 2 for feedback operation. The circuit performs the operations for i = 1,2,...,N. When the circuit accepts Σ_k(S_i, u) for i = N, switch 1 goes to position 1 and switch 2 is closed. This operation is done for u = 1 and u = 0. The difference between log(P_k(1|y)) and log(P_k(0|y)) represents the reliability value required for turbo decoding, i.e., the value of L_k in Equation (67).
We propose two simplifications to be used for computation of log(1 + e^{-x}) without using a lookup table.
Approximation 1: We use the approximation log(1 + e^{-x}) ≈ -a x + b for 0 ≤ x ≤ b/a, where b = log(2), and we selected a = 0.3 for the simulation. We observed about 0.1 dB degradation compared with the full MAP algorithm for the code. The parameter a should be optimized, and it may not necessarily be the same for the computation of Equation (50), Equation (51), and log(P_k(u|y)) from Equation (36) using Equation (52). We call this the "linear" approximation.
Approximation 2: We take

log(1 + e^{-x}) ≈ 0 if x > η, and c if x ≤ η.

We selected c = log(2) and the threshold η = 1.0 for our simulation. We observed about 0.2 dB degradation compared with the full MAP algorithm for the code. This threshold should be optimized for a given SNR, and it may not necessarily be the same for the computation of Equation (50), Equation (51), and log(P_k(u|y)) from Equation (36) using Equation (52). If we use this approximation, the log-BCJR algorithm can be built based on addition, comparison, and selection operations without requiring a lookup table, which is similar to a Viterbi algorithm implementation. We call this the "threshold" approximation. At most, an 8- to 10-bit representation suffices for all operations.
1.3.8 Trellis Termination
If needed, the encoder in Figure 28 may generate an (n(N + M), N) block code, where the M tail bits of encoder 2 and encoder 3 are not transmitted. Since the component encoders are recursive, it is not sufficient to set the last M information bits to zero in order to drive all the encoders to the all-zero state, i.e., to terminate the trellis. The termination (tail) sequence depends on the state of each component encoder after N bits, which makes it impossible to terminate the component encoders with just M bits. This issue has not been resolved in previously proposed turbo code implementations. Fortunately, the simple stratagem illustrated in Figure 33 is sufficient to terminate the trellis at the end of the block (the particular code shown is not important). Here the switch is in position "A" for the first N clock cycles and is in position "B" for M additional cycles, which will flush the encoders with zeros. The decoder does not assume knowledge of the M tail bits. The same termination method may be used for all encoders.
1.3.9 Weight Distribution
In order to estimate the performance of a code, it is necessary to have information about its minimum distance, weight distribution, or actual code geometry, depending on the accuracy required for the bounds or approximations. The challenge is in finding the pairing of codewords from each individual encoder induced by a particular set of interleavers. Intuitively, we would like to avoid joining low-weight codewords from one encoder with low-weight words from the other encoders. In the example of Figure 28, the component codes have distances 5, 2, and 2, which will produce a worst-case minimum distance of 9 for the overall code. Note that this would be the case if the encoders were not recursive since, in that case, the minimum-weight word for all three encoders is generated by the input sequence u = (00...0000100...000) with a single "1", which will appear again in the other encoders for any choice of interleavers. This motivates the use of recursive encoders, where the key ingredient is the recursiveness and not the fact that the encoders are systematic. For our example, the input sequence u = (00...00100100...000) generates a low-weight codeword with weight 6 for the first encoder. If the interleavers do not "break" this input pattern, the resulting codeword weight will be 14. In general, weight-2 sequences with 2 + 3t zeros separating the 1's would result in a total weight of 14 + 6t if there were no permutations. With permutations before the second and third encoders, a weight-2 sequence with its 1's separated by 2 + 3t_1 zeros will be permuted into two other weight-2 sequences with 1's separated by 2 + 3t_i zeros, i = 2,3, where each t_i is defined as a multiple of 1/3. If any t_i is not an integer, the corresponding encoded output will have a high weight because then the convolutional code output is non-terminating (until the end of the block). If all t_i's are integers, the total encoded weight will be 14 + 2 Σ_{i=1}^{3} t_i. Thus, one of the considerations in designing the interleaver is to avoid integer triplets (t_1, t_2, t_3) that are simultaneously small in all three components. In fact, it would be nice to design an interleaver to guarantee that the smallest value of Σ_{i=1}^{3} t_i (for integer t_i) grows with the block size N.
For comparison, we consider the same encoder structure in Figure 28, except with the roles of g_a and g_b reversed. Now the minimum distances of the three component codes are 5, 3, and 3, producing an overall minimum distance of 11 for the total code without any permutations. This is apparently a better code, but it turns out to be inferior as a turbo code. This paradox is explained by again considering the critical weight-2 data sequences. For this code, weight-2 sequences with 1 + 2t_1 zeros separating the two 1's produce self-terminating output and, hence, low-weight encoded words. In the turbo encoder, such sequences will be permuted to have separations 1 + 2t_i, i = 2,3, for the second and third encoders, where now each t_i is defined as a multiple of 1/2. But now the total encoded weight for integer triplets (t_1, t_2, t_3) is 11 + Σ_{i=1}^{3} t_i. Notice how this weight grows only half as fast with Σ_{i=1}^{3} t_i as the previously calculated weight for the original code. If Σ_{i=1}^{3} t_i can be made to grow with block size by proper choice of interleaver, then clearly it is important to choose component codes that cause the overall weight to grow as fast as possible with the individual separations t_i. This consideration outweighs the criterion of selecting component codes that would produce the highest minimum distance if unpermuted. There are also many weight-n, n = 3,4,5,..., data sequences that produce self-terminating output and, hence, low encoded weight. However, these sequences are much more likely to be broken up by the random interleavers than the weight-2 sequences and are therefore likely to produce non-terminating output from at least one of the encoders. Thus, turbo code structures that would have low minimum distances if unpermuted can still perform well if the low-weight codewords of the component codes are produced by input sequences with weight higher than two.
1.3.10 Weight Distribution with Random Interleavers
Now we briefly examine the issue of whether one or more random interleavers can avoid matching small separations between the 1's of a weight-2 data sequence with equally small separations between the 1's of its permuted version(s). Consider, for example, a particular weight-2 data sequence (...001001000...), which corresponds to a low-weight codeword in each of the encoders of Figure 28. If we randomly select an interleaver of size N, the probability that this sequence will be permuted into another sequence of the same form is roughly 2/N (assuming that N is large, and ignoring minor edge effects). The probability that such an unfortunate pairing happens for at least one possible position of the original sequence (...001001000...) within the block of size N is approximately 1 - (1 - 2/N)^N ≈ 1 - e^{-2}. This implies that the minimum distance of a two-code turbo code constructed with a random permutation is not likely to be much higher than the encoded weight of such an unpermuted weight-2 data sequence, e.g., 14 for the code in Figure 28. (For the worst-case permutations, the d_min of the code is still 9, but these permutations are highly unlikely if chosen randomly.) By contrast, if we use three codes and two different interleavers, the probability that a particular sequence (...001001000...) will be reproduced by both interleavers is only (2/N)². Now the probability of finding such an unfortunate data sequence somewhere within the block of size N is roughly 1 - (1 - (2/N)²)^N ≈ 4/N. Thus it is probable that a three-code turbo code using two random interleavers will see an increase in its minimum distance beyond the encoded weight of an unpermuted weight-2 data sequence. This argument can be extended to account for other weight-2 data sequences that may also produce low-weight codewords, e.g., (...00100(000)^t 1000...), for the code in Figure 28. For comparison, let us consider a weight-3 data sequence such as (...0011100...), which for our example corresponds to the minimum distance of the code (using no permutations). The probability that this sequence is reproduced with one random interleaver is roughly 6/N², and the probability that some sequence of the form (...0011100...) is paired with another of the same form is 1 - (1 - 6/N²)^N ≈ 6/N. Thus, for large block sizes, the bad weight-3 data sequences have a small probability of being matched with bad weight-3 permuted data sequences, even in a two-code system. For a turbo code using q codes and q-1 random interleavers, this probability is even smaller, 1 - (1 - (6/N²)^{q-1})^N ≈ (6/N)(6/N²)^{q-2}. This implies that the minimum-distance codeword of the turbo code in Figure 28 is more likely to result from a weight-2 data sequence of the form (...001001000...) than from the weight-3 sequence (...0011100...) that produces the minimum distance in the unpermuted version of the same code. Higher-weight sequences have an even smaller probability of reproducing themselves after being passed through a random interleaver. For a turbo code using q codes and q-1 interleavers, the probability that a weight-n data sequence will be reproduced somewhere within the block by all q-1 permutations is of the form 1 - (1 - (β/N^{n-1})^{q-1})^N, where β is a number that depends on the weight-n data sequence but does not increase with block size N. For large N, this probability is proportional to 1/N^{(n-1)(q-1)-1}, which falls off rapidly with N when n and q are greater than two. Furthermore, the symmetry of this expression indicates that increasing either the weight of the data sequence n or the number of codes q has roughly the same effect on lowering this probability. In summary, from the above arguments we conclude that weight-2 data sequences are an important factor in the design of the component codes, and that higher weights have decreasing importance. Also, increasing the number of codes may result in better turbo codes.
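The pairing probabilities quoted above are easy to evaluate numerically. The following sketch reproduces the approximations from the text for a few block sizes; the block sizes themselves are arbitrary example values.

```python
# Numeric check of the weight-2 pairing probabilities discussed above:
# 1-(1-2/N)^N ~ 1-e^-2 with a single random interleaver (two codes), versus
# roughly 4/N with two independent random interleavers (three codes).
for N in (256, 4096, 16384):
    p_two_codes = 1.0 - (1.0 - 2.0 / N) ** N
    p_three_codes = 1.0 - (1.0 - (2.0 / N) ** 2) ** N
    print(f"N={N:6d}  weight-2 match, 2 codes: {p_two_codes:.3f}"
          f"   3 codes: {p_three_codes:.5f}  (~4/N = {4.0 / N:.5f})")
```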
The minimum distance is not the most important quantity of the turbo code, except for its asymptotic performance at very high E_b/N_0. At moderate SNRs, the weight distribution for the first several possible weights is necessary to compute the code performance. Estimating the complete weight distribution of these codes for large N and fixed interleavers is still an open problem. However, it is possible to estimate the weight distribution for large N for random interleavers by using probabilistic arguments.
1.3.11 Interleaver Design
Interleavers should be capable of spreading low-weight input sequences so that the resulting codeword has high weight. Block interleavers, defined by a matrix with v_r rows and v_c columns such that N = v_r × v_c, may fail to spread certain sequences. For example, the weight-4 sequence shown in Figure 39 cannot be broken by a block interleaver. In order to break such sequences, random interleavers are desirable. Block interleavers are effective if the low-weight sequence is confined to a row. If low-weight sequences (which can be regarded as the combination of lower-weight sequences) are confined to several consecutive rows, then the v_c columns of the interleaver should be sent in a specified order to spread the low-weight sequence as much as possible. As can be observed in the example in Figure 39, the sequence 1001 will still appear at the input of the encoders for any possible column permutation. Only if we permute the rows of the interleaver in addition to its columns is it possible to break the low-weight sequences. Appropriate selection of a and q for rows and columns depends on the particular set of codes used and on the specific low-weight sequences that we would like to break. We have also designed random permutations (interleavers) by generating random integers i, 1 ≤ i ≤ N, without replacement. We define an "S-random" permutation as follows: each randomly selected integer is compared to the S previously selected integers; if the current selection is equal to any of the S previous selections within a distance of ±S, then the current selection is rejected. This process is repeated until all N integers are selected. While the searching time increases with S, we observed that choosing S < (N/2)^{1/2} usually produces a solution in reasonable time. (For S = 1 we have a purely random interleaver.) In the simulations we used S = 11 for N = 256 and S = 31 for N = 4096. The advantage of using three or more constituent codes is that the corresponding two or more interleavers have a better chance to break sequences that were not taken care of by another interleaver. The disadvantage is that, for an overall desired code rate, each code must be punctured more, resulting in weaker constituent codes. Comparisons have been made between randomly selected interleavers and interleavers based on the row-column permutation described above. In general, randomly selected permutations are good for low-SNR operation (e.g., PCS applications requiring P_b(e) = 10^{-3}), where the overall weight distribution of the code is more important than the minimum distance.
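The S-random construction described above can be sketched in a few lines. The restart-on-stall strategy below is one possible way to complete the search and is not prescribed by the text; the parameters in the usage example follow the S = 11, N = 256 case mentioned above.

```python
import random

def s_random_permutation(N, S, max_attempts=1000):
    """Generate an "S-random" permutation: each randomly drawn index is
    rejected if it lies within +/-S of any of the S most recently accepted
    indices; S < sqrt(N/2) usually converges in reasonable time."""
    for _ in range(max_attempts):
        remaining = list(range(N))
        random.shuffle(remaining)
        perm, stalled = [], False
        while remaining:
            for pos, candidate in enumerate(remaining):
                if all(abs(candidate - prev) > S for prev in perm[-S:]):
                    perm.append(remaining.pop(pos))
                    break
            else:
                stalled = True      # no acceptable candidate: restart with a new shuffle
                break
        if not stalled:
            return perm
    raise RuntimeError("no S-random permutation found; try a smaller S")

if __name__ == "__main__":
    perm = s_random_permutation(256, 11)   # the S = 11, N = 256 case from the text
    print(perm[:16])
```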
1.3.12 Performance with Two Codes
The performance obtained by turbo decoding the code with two constituent codes (1, g_b/g_a), where g_a = (37)octal and g_b = (21)octal, and with random permutations of lengths N = 4096 and N = 16384, is compared in Figure 40 to the capacity of a binary-input Gaussian channel for rate r = 1/4. (Note that the components of the L̃'s corresponding to the tail bits are set to zero for all iterations.) The best performance curve in Figure 40 is approximately 0.7 dB from the Shannon limit at BER = 10^{-4}.
1.3.13 Performance with Unequal Rate Encoders
We now extend the results to encoders with unequal rates, with two K = 5 constituent codes (1, g_b/g_a, g_c/g_a) and (g_b/g_a), where g_a = (37)octal, g_b = (33)octal, and g_c = (25)octal. This structure improves the performance of the overall rate 1/4 code, as shown in Figure 40. This improvement is due to the fact that we can avoid using the interleaved information data at the second encoder and that the rate of the first code is lower than that of the second code. For PCS applications, short interleavers should be used, since the vocoder frame is usually 20 ms. We therefore consider 192-bit and 256-bit interleavers as an example, corresponding to 9.6 and 13 kbps. The performance of codes with short interleavers is shown in Figure 41 for the K = 5 codes described above, for random permutation and for row-column permutation with a = 2 for rows and a = 4 for columns.
1.3.14 Performance with Three Codes
The performance of a three-code turbo code with random interleavers is shown in Figure 42 for N = 4096. The three recursive codes shown in Figure 28 were used for K = 3. Three recursive codes with g_a = (13)octal and g_b = (11)octal were used for K = 4. Using the K = 4 code gives better performance than several others. In Figure 42, the performance of the K = 4 code was improved by going to 30 iterations and using an S-random interleaver with S = 31. For shorter blocks (192 and 256), the results are shown in Figure 41, where it can be observed that approximately 1 dB SNR is required for BER = 10^{-3}, which implies a CDMA capacity C ≈ 0.8η. We have noticed that the slope of the BER curve changes around BER = 10^{-5} (flattening effect) if the interleaver is not designed properly to maximize d_min or is chosen at random.
1.4 Comparison Between Parallel and Serially Multiple Concatenated Codes
In this section, we use the bit-error probability bounds previously derived to compare the performance of parallel and serially multiple concatenated block and convolutional codes.
1.4.1 Parallel and Serially Multiple Concatenated Block Codes
To obtain a fair comparison, we have chosen the following PMCBC and SMCBC: the PMCBC has parameters (11m, 3m, N) and employs two equal (7,3) systematic cyclic codes with generator g(D) = (1+D)(1 + D + D^3); the SMCBC, instead, is a (15m, 4m, N) SMCBC obtained by the concatenation of the (7,4) Hamming code with a (15,7) BCH code.
They have almost the same rates (R_c^s = 0.266 and R_c^p = 0.273) and have been compared choosing the interleaver length in such a way that the decoding delay due to the interleaver, measured in terms of input information bits, is the same. As an example, to obtain a delay equal to 12 input bits, we must choose an interleaver length N = 12 for the PMCBC and N = 12/R_c^o = 21 for the SMCBC. The results are shown in Figure 25, where we plot the bit-error probability versus the signal-to-noise ratio E_b/N_0 for various input delays. The results show that for low values of the delay, the performances are almost the same. On the other hand, increasing the delay (and thus the interleaver length N) yields a significant interleaver gain for the SMCBC and almost no gain for the PMCBC. The difference in performance is 3 dB at P_b(e) = 10^{-4} in favor of the SMCBC.
1.4.2 Parallel and Serially Multiple Concatenated Convolutional Codes
To obtain a fair comparison, we have chosen the following PMCCC and SMCCC: The PMCCC is a rate 1/3 code obtained by concatenating two equal rate 1/2, four-state systematic recursive convolutional codes with a generator matrix as in the first row of Table 2. The SMCCC is a rate 1/3 code. It is formed using as an outer code the same rate 1/2, four-state code as in the PMCCC and, as an inner code, a rate 2/3, four-state systematic recursive convolutional code with a generator matrix as in the third row of Table 2. Also in this case, the interleaver lengths have been chosen so as to yield the same decoding delay, due to the interleaver, in terms of input bits. The results are shown in Figure 44, where we plot the bit-error probability versus the signal-to-noise ratio E_b/N_0 for various input delays.
The results show the great difference in the interleaver gain. In particular, the PMCCC shows an interleaver gain going as N^{-1}, whereas the interleaver gain of the SMCCC, as from Expression (31), goes as N^{-⌊(d_f^o + 1)/2⌋} = N^{-3}, since the free distance of the outer code is equal to 5, which is odd. This means, at P_b(e) = 10^{-4}, a gain of more than 2 dB in favor of the SMCCC. Previous comparisons have shown that serial concatenation is advantageous with respect to parallel concatenation in terms of maximum-likelihood performance. For long interleaver lengths, this significant result remains a theoretical one, as maximum-likelihood decoding is an almost impossible achievement.
2. A SISO MAP Module to Decode Parallel and Serial Multiple Concatenated Codes
Multiple concatenated coding schemes with interleavers comprise a combination of two simple constituent encoders and an interleaver. The parallel concatenation has been shown to yield remarkable coding gains close to theoretical limits, yet admitting a relatively simple iterative decoding technique. The serial concatenation of interleaved codes may offer superior performance. In both coding schemes, the core of the iterative decoding structure is a soft-input soft-output (SISO) module. Here, we describe the SISO module in a form that continuously updates the maximum a posteriori (MAP) probabilities of input and output code symbols and show how to embed it into iterative decoders for parallel and serially concatenated codes.
2.1 Introduction
Both concatenated coding schemes admit a suboptimum decoding process based on iterations of the MAP algorithm applied to each constituent code. Here we describe a SISO module that implements the MAP algorithm in its basic form, its extension to the additive MAP (log-MAP), which is indeed a dual-generalized Viterbi algorithm with correction, and finally its extension to the continuous decoding of PMCCCs and SMCCCs. As examples of applications, we will show the results obtained by decoding two low-rate codes with very high coding gain.
2.2 Iterative Decoding of Parallel and Serial Concatenated Codes
In this section, we show the block diagrams of parallel and serially concatenated codes, together with their iterative decoders. Both iterative decoding algorithms need a particular module, named SISO, which implements operations strictly related to the MAP algorithm.
2.2.1 Parallel Concatenated Codes
The block diagram of a PMCCC is shown in Figure 45(a) (the same construction also applies to block codes). In Figure 45, a rate 1/3 PMCCC is obtained using two rate 1/2 constituent codes (CCs) and an interleaver. For each input information bit, the codeword sent to the channel is formed by the input bit, followed by the parity check bits generated by the two encoders. In Figure 45(b), the block diagram of the iterative decoder is also shown. It is based on two modules denoted by "SISO," one for each encoder, an interleaver, and a deinterleaver performing the inverse permutation with respect to the interleaver.
The SISO module is a four-port device (quadriport), with two inputs and two outputs. Here, it suffices to say that it accepts as inputs the probability distributions of the information and code symbols labeling the edges of the code trellis, and forms as outputs an update of these distributions based upon the code constraints. In Figure 45(b) it can be seen that the updated probabilities of the code symbols are never used by the decoding algorithm.
2.2.2 Serially Multiple Concatenated Codes
The block diagram of an SMCCC is shown in Figure 46(a) (the same construction also applies to block codes). In Figure 46(a), a rate 1/3 SMCCC is obtained using as an outer encoder a rate 1/2 encoder, and as an inner encoder a rate 2/3 encoder. An interleaver permutes the output codewords of the outer code before passing them to the inner code. In Figure 46(b), the block diagram of the iterative decoder is shown. It is based on two modules denoted by "SISO," one for each encoder, an interleaver, and a deinterleaver. The SISO module is the same as described before. In this case, though, both updated probabilities of the input and code symbols are used in the decoding procedure.
2.2.3 Soft-Output Algorithms
The SISO module is based on MAP algorithms. These algorithms perform both forward and backward recursions and, thus, require that the whole sequence be received before starting the decoding operations. As a consequence, they can only be used in block-mode decoding. The memory requirement and computational complexity grow linearly with the sequence length. Some algorithms require only a forward recursion, so that they can be used in continuous-mode decoding; however, their memory and computational complexity grow exponentially with the decoding delay. It is possible to use a MAP symbol-by-symbol decoding algorithm conjugating the positive aspects of other algorithms, i.e., a fixed delay and linear memory and complexity growth with decoding delay. All these algorithms are truly MAP algorithms. To reduce the computational complexity, various forms of suboptimum soft-output algorithms can be used. Two approaches have been taken. The first approach tries to modify the Viterbi algorithm. These augmented outputs include the depth at which all paths are merged, the difference in length between the best and the next-best paths at the point of merging, and a given number of the most likely path sequences. The same concept of augmented output was later generalized for various applications. A different approach to the modification of the Viterbi algorithm consists of generating a reliability value for each bit of the hard-output signal and is called the soft-output Viterbi algorithm (SOVA). In the binary case, the degradation of SOVA with respect to MAP is small; however, SOVA is not as effective in the nonbinary case. The second approach consists of revisiting the original symbol MAP decoding algorithms with the aim of simplifying them to a form suitable for implementation.
2.3 The SISO Module
2.3.1 The Encoder
The decoding algorithm underlying the behavior of the SISO works for codes admitting a trellis representation. It can be a time-invariant or time-varying trellis, and, thus, the algorithm can be used for both block and convolutional codes. In Figure 47, we show a trellis encoder, characterized by the following quantities. (In the following, capital letters U, C, S, E will denote random variables and lower-case letters u, c, s, e their realizations. The notation P[A] will denote the probability of the event A, whereas P(a) will denote a function of a. The subscript k will denote a discrete time, defined on the time index set K. Other subscripts, like i, will refer to elements of a finite set. Also, "( )" will denote a time sequence, whereas "{ }" will denote a finite set of elements.)
(1) U = (U_k)_{k∈K} is the sequence of input symbols, defined over a time index set K (finite or infinite) and drawn from the alphabet U = {u_1, ..., u_{N_I}}. To the sequence of input symbols, we associate the sequence of a priori probability distributions P(u; I) = (P_k(u_k; I))_{k∈K}, where P_k(u_k; I) = P[U_k = u_k].
(2) C = (C_k)_{k∈K} is the sequence of output, or code, symbols, defined over the same time index set K and drawn from the alphabet C = {c_1, ..., c_{N_O}}. To the sequence of output symbols, we associate the sequence of a priori probability distributions P(c; I) = (P_k(c_k; I))_{k∈K}. For simplicity of notation, we drop the dependency of u_k and c_k on k; thus, P_k(u_k; I) and P_k(c_k; I) will be denoted simply by P_k(u; I) and P_k(c; I), respectively.
2.3.2 The Trellis Section
The dynamics of a time-invariant convolutional code are completely specified by a single trellis section, which describes the transitions (edges) between the states of the trellis at time instants k and k+1. A trellis section is characterized by the following:
(1) A set of N states S = {s_1, ..., s_N}. The state of the trellis at time k is S_k = s, with s ∈ S.
(2) A set of N × N_I edges obtained by the Cartesian product E = S × U = {e_1, ..., e_{N×N_I}}, which represents all possible transitions between the trellis states.
The following functions are associated with each edge e ∈ E (see Figure 48):
(1) the starting state s^S(e) (the projection of e onto S);
(2) the ending state s^E(e);
(3) the input symbol u(e) (the projection of e onto U);
(4) the output symbol c(e).
The relationship between these functions depends on the particular encoder. As an example, in the case of systematic encoders, (s^S(e), c(e)) also identifies the edge, since u(e) is uniquely determined by c(e). In the following, we only assume that the pair (s^S(e), u(e)) uniquely identifies the ending state s^E(e); this assumption is always verified, as it is equivalent to saying that, given the initial trellis state, there is a one-to-one correspondence between input sequences and state sequences, a property required for the code to be uniquely decodable.
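A small sketch of this edge description follows. It enumerates the edge set E = S × U for a hypothetical rate 1/2 recursive systematic encoder (taken here, purely as an example, as G(D) = [1, (1 + D^2)/(1 + D + D^2)], a four-state code that is not one of the codes discussed in this document), recording s^S(e), s^E(e), u(e), and c(e) for each edge.

```python
from collections import namedtuple

Edge = namedtuple("Edge", ["start", "end", "u", "c"])   # s^S(e), s^E(e), u(e), c(e)

def trellis_section(feedback=(1, 1, 1), feedforward=(1, 0, 1)):
    """Edge set E = S x U of one trellis section for a hypothetical rate-1/2
    recursive systematic encoder; the defaults encode G(D) = [1, (1+D^2)/(1+D+D^2)]."""
    memory = len(feedback) - 1
    edges = []
    for state in range(2 ** memory):                      # starting state s^S(e)
        regs = [(state >> i) & 1 for i in range(memory)]  # regs[0] is the most recent bit
        for u in (0, 1):                                  # input symbol u(e)
            a = (u + sum(f * r for f, r in zip(feedback[1:], regs))) % 2
            parity = (feedforward[0] * a
                      + sum(g * r for g, r in zip(feedforward[1:], regs))) % 2
            next_regs = [a] + regs[:-1]
            end = sum(b << i for i, b in enumerate(next_regs))   # ending state s^E(e)
            edges.append(Edge(state, end, u, (u, parity)))       # c(e) = (systematic, parity)
    return edges

if __name__ == "__main__":
    # The pair (s^S(e), u(e)) uniquely identifies s^E(e), as assumed in the text.
    for e in trellis_section():
        print(e)
```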
2.3.3 The SISO Algorithm
The SISO module is a four-port device that accepts at its inputs the sequences of probability distributions P(c; I) and P(u; I) and outputs the sequences of probability distributions P(c; O) and P(u; O) based on its inputs and on its knowledge of the trellis section (or code in general). We assume first that the time index set K is finite, i.e., K = {1, ..., n}. The algorithm by which the SISO operates in evaluating the output distributions will be explained in two steps. In the first step, we consider the following algorithm:
(1) At time k, the output probability distributions are computed as

P_k(c; O) = H_c Σ_{e: c(e)=c} A_{k-1}[s^S(e)] P_k[u(e); I] P_k[c(e); I] B_k[s^E(e)]  (78)
P_k(u; O) = H_u Σ_{e: u(e)=u} A_{k-1}[s^S(e)] P_k[u(e); I] P_k[c(e); I] B_k[s^E(e)]  (79)

(2) The quantities A_k(·) and B_k(·) are obtained through the forward and backward recursions, respectively, as

A_k(s) = Σ_{e: s^E(e)=s} A_{k-1}[s^S(e)] P_k[u(e); I] P_k[c(e); I],  k = 1, ..., n  (80)
B_k(s) = Σ_{e: s^S(e)=s} B_{k+1}[s^E(e)] P_{k+1}[u(e); I] P_{k+1}[c(e); I],  k = n-1, ..., 0  (81)

with initial values

A_0(s) = 1 if s = S_0, and 0 otherwise  (82)
B_n(s) = 1 if s = S_n, and 0 otherwise  (83)

The quantities H_c and H_u are normalization constants defined so that

Σ_c P_k(c; O) = 1 and Σ_u P_k(u; O) = 1.

In the second step, from Equations (78) and (79), it is apparent that the quantities P_k[c(e); I] in the first equation and P_k[u(e); I] in the second do not depend on e, by definition of the summation indices, and thus can be extracted from the summations. Thus, we define the new quantities P̃_k(c; O) and P̃_k(u; O) as the output distributions with these common factors removed, where H̃_c and H̃_u are normalization constants such that

Σ_c P̃_k(c; O) = 1 and Σ_u P̃_k(u; O) = 1.

It can be easily verified that they can be obtained through the expressions

P̃_k(c; O) = H̃_c Σ_{e: c(e)=c} A_{k-1}[s^S(e)] P_k[u(e); I] B_k[s^E(e)]  (84)
P̃_k(u; O) = H̃_u Σ_{e: u(e)=u} A_{k-1}[s^S(e)] P_k[c(e); I] B_k[s^E(e)]  (85)

where the A's and B's satisfy the same recursions previously introduced in Equations (80) and (81). The new probability distributions P̃_k(u; O) and P̃_k(c; O) represent a smoothed version of the input distributions P_k(c; I) and P_k(u; I), based on the code constraints and obtained using the probability distributions of all symbols of the sequence but the kth ones, P_k(c; I) and P_k(u; I). In the literature on PMCCC decoding, P̃_k(u; O) and P̃_k(c; O) would be called extrinsic information. They represent the added value of the SISO module to the a priori distributions P_k(u; I) and P_k(c; I). Basing the SISO algorithm on P̃_k(·; O) instead of on P_k(·; O) simplifies the block diagrams, and related software and hardware, of the iterative schemes for decoding concatenated codes. The SISO module is then represented as in Figure 49.
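The two-step algorithm above maps directly onto a forward recursion, a backward recursion, and an edge-wise combination. The following is a minimal multiplicative sketch of Equations (80) through (85) over a generic edge list; the per-step normalization stands in for the constants H̃_c and H̃_u, and the two-state accumulator and the probability values in the demonstration are hypothetical.

```python
def siso(edges, num_states, p_u_in, p_c_in, start_state=0, end_state=0):
    """Multiplicative SISO update on a time-invariant trellis given as a list
    of edges (start, end, u, c).  p_u_in[k][u] and p_c_in[k][c] are the input
    (a priori) distributions for k = 1..n; extrinsic outputs are returned."""
    n = len(p_u_in)
    # Forward recursion, Equation (80), initialized as in Equation (82).
    A = [[0.0] * num_states for _ in range(n + 1)]
    A[0][start_state] = 1.0
    for k in range(1, n + 1):
        for s_start, s_end, u, c in edges:
            A[k][s_end] += A[k - 1][s_start] * p_u_in[k - 1][u] * p_c_in[k - 1][c]
        norm = sum(A[k]) or 1.0
        A[k] = [a / norm for a in A[k]]
    # Backward recursion, Equation (81), initialized as in Equation (83).
    B = [[0.0] * num_states for _ in range(n + 1)]
    B[n][end_state] = 1.0
    for k in range(n - 1, -1, -1):
        for s_start, s_end, u, c in edges:
            B[k][s_start] += B[k + 1][s_end] * p_u_in[k][u] * p_c_in[k][c]
        norm = sum(B[k]) or 1.0
        B[k] = [b / norm for b in B[k]]
    # Extrinsic outputs, Equations (84) and (85).
    p_u_out, p_c_out = [], []
    for k in range(1, n + 1):
        pu, pc = {}, {}
        for s_start, s_end, u, c in edges:
            val = A[k - 1][s_start] * B[k][s_end]
            pu[u] = pu.get(u, 0.0) + val * p_c_in[k - 1][c]
            pc[c] = pc.get(c, 0.0) + val * p_u_in[k - 1][u]
        pu_norm = sum(pu.values()) or 1.0
        pc_norm = sum(pc.values()) or 1.0
        p_u_out.append({u: v / pu_norm for u, v in pu.items()})
        p_c_out.append({c: v / pc_norm for c, v in pc.items()})
    return p_u_out, p_c_out

if __name__ == "__main__":
    # Tiny demo on a hypothetical 2-state accumulator (output = input + previous output).
    acc_edges = [(s, s ^ u, u, s ^ u) for s in (0, 1) for u in (0, 1)]
    p_u = [{0: 0.5, 1: 0.5}] * 4                       # uniform a priori on the input
    p_c = [{0: 0.9, 1: 0.1}, {0: 0.2, 1: 0.8},
           {0: 0.6, 1: 0.4}, {0: 0.7, 1: 0.3}]         # hypothetical channel observations
    p_u_out, _ = siso(acc_edges, 2, p_u, p_c, end_state=0)
    print(p_u_out)
```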
Previously proposed algorithms were not in a form suitable for working with a general trellis code. Most of them assumed binary input symbols, some also assumed systematic codes, and none (not even the original BCJR algorithm) could cope with a trellis having parallel edges. As can be noticed from all the summations involved in the equations that define the SISO algorithm, we work on trellis edges rather than on pairs of states. This makes the algorithm completely general and capable of coping with parallel edges and also with encoders with rates greater than one, like those encountered in some concatenated schemes.
2.3.4 Computation of Input and Output Bit Extrinsic Information
In this subsection, bit extrinsic information is derived from the symbol extrinsic information using Equations (84) and (85). Consider a rate k_0/n_0 trellis encoder such that each input symbol U comprises k_0 bits and each output symbol C comprises n_0 bits. Assume

P_k(c; I) = Π_{j=1}^{n_0} P_{k,j}(c^j; I)  (86)
P_k(u; I) = Π_{j=1}^{k_0} P_{k,j}(u^j; I)  (87)

where c^j ∈ {0,1} denotes the value of the jth bit C_k^j of the output symbol C_k = c, j = 1, ..., n_0, and u^j ∈ {0,1} denotes the value of the jth bit U_k^j of the input symbol U_k = u, j = 1, ..., k_0. This assumption is valid in iterative decoding when bit interleavers rather than symbol interleavers are used. One should be cautious when using P_k(c; I) as a product for those encoders in a concatenated system whose output C in Figure 47 is connected to a channel. For such cases, if, for example, a nonbinary-input additive white Gaussian noise (AWGN) channel is used, this assumption usually is not needed (this will be discussed shortly), and P_k(c; I) = P_k(c|y) = P_k(y|x(c)) P(c)/P(y), where y is the complex received sample(s) and x(c) is the transmitted nonbinary symbol(s). Then, for binary-input memoryless channels, P_k(y|x(c)) can be written as a product. After obtaining the symbol probability distributions P̃_k(c; O) and P̃_k(u; O) from Equations (84) and (85) by using Equations (80) and (81), it is then easy to show that the input and output bit extrinsic information can be obtained as

P̃_{k,j}(c^j; O) = H̃_{c^j} Σ_{c: C_k^j = c^j} P̃_k(c; O) Π_{i=1, i≠j}^{n_0} P_{k,i}(c^i; I)  (88)
P̃_{k,j}(u^j; O) = H̃_{u^j} Σ_{u: U_k^j = u^j} P̃_k(u; O) Π_{i=1, i≠j}^{k_0} P_{k,i}(u^i; I)  (89)

where H̃_{c^j} and H̃_{u^j} are normalization constants such that Σ_{c^j} P̃_{k,j}(c^j; O) = 1 and Σ_{u^j} P̃_{k,j}(u^j; O) = 1. Equation (86) is not used for those encoders in a concatenated coded system connected to a channel. To keep the expressions general, as is seen from Equations (80), (81), and (89), P_k[c(e); I] is not represented as a product.
In the following sections, for simplicity of notation, the probability distribution of symbols rather than of bits is considered. The extension of the results to probability distributions of bits based on the above derivations is straightforward.
2.4 The Sliding-Window Soft-Input Soft-Output Module (SW-SISO)
As the previous description should have made clear, the SISO algorithm requires that the whole sequence has been received before starting the smoothing process. The reason is the backward recursion that starts from the (supposedly known) final trellis state. As a consequence, its practical application is limited to the case when the duration of the transmission is short (n small) or, for n long, when the received sequence can be segmented into independent consecutive blocks, like for block codes or convolutional codes with trellis termination. It cannot be used for continuous decoding of convolutional codes. This constraint leads to a frame rigidity imposed on the system and also reduces the overall code rate. A more flexible decoding strategy is offered by modifying the algorithm in such a way that the SISO module operates on a fixed memory span and outputs the smoothed probability distributions after a given delay D. This new algorithm is called the sliding-window soft-input soft-output (SW-SISO) algorithm (and module). We propose two versions of the SW-SISO that differ in the way they overcome the problem of initializing the backward recursion without waiting for the entire sequence. From now on, we assume that the time index set K is semi-infinite, i.e., K = {1, ..., ∞}, and that the initial state s_0 is known.
2.4.1 First Version of the Sliding-Window SISO Algorithm (SW-SISO1)
The SW-SISO1 algorithm consists of the following steps:
(1) Initialize A_0 according to Equation (82).
(2) Forward recursion at time k: compute the A_k through the forward recursion of Equation (80).
(3) Initialization of the backward recursion (time k > D), according to Equation (90).
(4) Backward recursion: it is performed according to Equation (81), from iteration i = 1 to i = D, as

B_{k-i}(s) = Σ_{e: s^S(e)=s} B_{k-i+1}[s^E(e)] P_{k-i+1}[u(e); I] P_{k-i+1}[c(e); I]  (91)

(5) The probability distributions at time k - D are computed as

P̃_{k-D}(c; O) = H̃_c Σ_{e: c(e)=c} A_{k-D-1}[s^S(e)] P_{k-D}[u(e); I] B_{k-D}[s^E(e)]  (93)
P̃_{k-D}(u; O) = H̃_u Σ_{e: u(e)=u} A_{k-D-1}[s^S(e)] P_{k-D}[c(e); I] B_{k-D}[s^E(e)]  (94)
2.4.2 The Second, Simplified Version of the Sliding-Window SISO Algorithm (SW-SISO2)
A further simplification of the sliding-window SISO algorithm, which is similar to SW-SISO1 except for the backward initial condition and which significantly reduces the memory requirements, consists of the following steps:
(1) Initialize A_0 according to Equation (82).
(2) Forward recursion at time k, k > D: compute the A_{k-D} through the forward recursion

A_{k-D}(s) = Σ_{e: s^E(e)=s} A_{k-D-1}[s^S(e)] P_{k-D}[u(e); I] P_{k-D}[c(e); I],  k > D  (95)

(3) Initialization of the backward recursion (time k > D):

B_k(s) = 1/N,  ∀s  (96)

(4) Backward recursion (time k > D): it is performed according to Equation (91), as before.
(5) The probability distributions at time k - D are computed according to Equations (93) and (94), as before.
2.4.3 Memory and Computational Complexity
2.4.3.1 Algorithm SW-SISO1
For a convolutional code with parameters (k_0, n_0) and number of states N, so that N_I = 2^{k_0} and N_O = 2^{n_0}, the algorithm SW-SISO1 requires storage of N × D values of A's and D(N_I + N_O) values of the input unconstrained probabilities P_k(u; I) and P_k(c; I). Moreover, to update the A's and B's for each time instant, it needs to perform 2 × N × N_I multiplications and N additions of N_I numbers. To output the set of probability distributions at each time instant, we need a D-times-long backward recursion. Thus, overall, the computational complexity requires 2(D+1) × N × N_I multiplications and (D+1) × N × (N_I - 1) additions.
2.4.3.2 Algorithm SW-SISO2
This simplified version of the sliding-window SISO algorithm does not require the storage of the N × D values of A's, as they are updated with a delay of D steps. As a consequence, only N values of A's and D(N_I + N_O) values of the input unconstrained probabilities P_k(u; I) and P_k(c; I) need to be stored. The computational complexity is the same as that of the previous version of the algorithm. However, since the initialization of the B recursion is less accurate, a larger value of D may be necessary.
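The storage and operation counts quoted above can be tabulated with a few lines of code. The function below simply evaluates the expressions stated in the text for given N, k_0, n_0, and D; the parameter values in the example are hypothetical.

```python
def sw_siso1_cost(num_states, k0, n0, D):
    # Cost figures for SW-SISO1 as stated in the text, with N_I = 2**k0 and N_O = 2**n0.
    N_I, N_O = 2 ** k0, 2 ** n0
    return {
        "stored A values": num_states * D,
        "stored input distributions": D * (N_I + N_O),
        "multiplications": 2 * (D + 1) * num_states * N_I,
        "additions": (D + 1) * num_states * (N_I - 1),
    }

if __name__ == "__main__":
    # Hypothetical 16-state, rate-1/2 component code with window D = 20.
    print(sw_siso1_cost(num_states=16, k0=1, n0=2, D=20))
```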
2.5 The Additive SISO Algorithm (A-SISO)
The sliding-window SISO algorithms solve the problem of continuously updating the probability distributions without requiring trellis terminations. Their computational complexity, however, is still high when compared to other suboptimal algorithms like SOVA. This is due mainly to the fact that they are multiplicative algorithms. We overcome this drawback by proposing the additive version of the SISO algorithm. Clearly, the same procedure can be applied to its two sliding-window versions, SW-SISO1 and SW-SISO2. To convert the previous SISO algorithm from multiplicative to additive form, we exploit the monotonicity of the logarithm function and use for the quantities P(u; I), P(c; I), A, and B their natural logarithms, according to the following definitions:

π_k(c; I) = log[P_k(c; I)]
π_k(u; I) = log[P_k(u; I)]
π_k(c; O) = log[P̃_k(c; O)]
π_k(u; O) = log[P̃_k(u; O)]
α_k(s) = log[A_k(s)]
β_k(s) = log[B_k(s)]

With these definitions, the SISO algorithm defined by Equations (84) and (85) and Equations (80) and (81) becomes the following. At time k, the output probability distributions are computed as

π_k(c; O) = log { Σ_{e: c(e)=c} exp( α_{k-1}[s^S(e)] + π_k[u(e); I] + β_k[s^E(e)] ) } + h_c  (97)
π_k(u; O) = log { Σ_{e: u(e)=u} exp( α_{k-1}[s^S(e)] + π_k[c(e); I] + β_k[s^E(e)] ) } + h_u  (98)

where the quantities α_k(·) and β_k(·) are obtained through the forward and backward recursions, respectively, as

α_k(s) = log { Σ_{e: s^E(e)=s} exp( α_{k-1}[s^S(e)] + π_k[u(e); I] + π_k[c(e); I] ) },  k = 1, ..., n  (99)
β_k(s) = log { Σ_{e: s^S(e)=s} exp( β_{k+1}[s^E(e)] + π_{k+1}[u(e); I] + π_{k+1}[c(e); I] ) },  k = n-1, ..., 0  (100)

with initial values

α_0(s) = 0 if s = S_0, and -∞ otherwise
β_n(s) = 0 if s = S_n, and -∞ otherwise

The quantities h_c and h_u are normalization constants needed to prevent excessive growth of the numerical values of the α's and β's.
The problem in the previous recursions lies in the evaluation of the logarithm of a sum of exponentials of the general form

a = log Σ_{i=1}^{L} e^{a_i}  (101)

To evaluate a in Equation (101), we can use two approximations with increasing accuracy (and complexity). The first approximation is

a ≅ a_M  (102)

where we have defined a_M ≜ max_i {a_i}, i = 1, ..., L. This approximation assumes that a_M >> a_i for all other i. It is almost optimal for medium-to-high signal-to-noise ratios and leads to performance degradations of the order of 0.5 to 0.7 dB for very low signal-to-noise ratios.
Using Equation (102), the recursions of Equations (99) and (100) become

α_k(s) = max_{e: s^E(e)=s} { α_{k-1}[s^S(e)] + π_k[u(e); I] + π_k[c(e); I] },  k = 1, ..., n  (103)
β_k(s) = max_{e: s^S(e)=s} { β_{k+1}[s^E(e)] + π_{k+1}[u(e); I] + π_{k+1}[c(e); I] },  k = n-1, ..., 0  (104)

and the π's of Equations (97) and (98) become

π_k(c; O) = max_{e: c(e)=c} { α_{k-1}[s^S(e)] + π_k[u(e); I] + β_k[s^E(e)] } + h_c  (105)
π_k(u; O) = max_{e: u(e)=u} { α_{k-1}[s^S(e)] + π_k[c(e); I] + β_k[s^E(e)] } + h_u  (106)
When the accuracy of the previously proposed approximation is not sufficient, we can evaluate "a" in Equation (101) using the following recursive algorithm:

a^{(i)} = max { a^{(i-1)}, a_i } + log[ 1 + e^{-|a^{(i-1)} - a_i|} ],  i = 2, ..., L
a^{(1)} = a_1,  a = a^{(L)}

To evaluate a, the algorithm needs to perform (L - 1) times two kinds of operations: a comparison between two numbers to find the maximum, and the computation of log(1 + e^{-Δ}), Δ ≥ 0. The second operation can be implemented using a single-entry lookup table up to the desired accuracy. Therefore, a in Equation (101) can be written as

a = max*_i {a_i} ≜ max_i {a_i} + δ(a_1, a_2, ..., a_L)

The second term, δ(a_1, a_2, ..., a_L), is called the correction term and can be computed using a lookup table, as discussed above. Now, if desired, max can be replaced by max* in Equations (103) through (106).
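The recursive evaluation of max* can be sketched as follows. The correction term is computed directly here rather than read from the single-entry lookup table mentioned above, and the metric values in the example are arbitrary.

```python
import math

def max_star(values, with_correction=True):
    """Pairwise max* recursion: a^(i) = max(a^(i-1), a_i) + log(1 + e^-|a^(i-1) - a_i|).
    With the correction disabled, it degenerates to the plain maximum of Equation (102)."""
    acc = values[0]
    for a in values[1:]:
        delta = abs(acc - a)
        correction = math.log(1.0 + math.exp(-delta)) if with_correction else 0.0
        acc = max(acc, a) + correction
    return acc

if __name__ == "__main__":
    a = [-0.3, -2.1, -0.9, -4.0]
    exact = math.log(sum(math.exp(x) for x in a))
    print(f"log-sum-exp = {exact:.4f}, max* = {max_star(a):.4f}, "
          f"plain max = {max_star(a, with_correction=False):.4f}")
```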
Clearly, the additive form of the SISO algorithm can be applied to both versions of the sliding-window SISO algorithm described in the previous section, with straightforward modifications.
2.6 Applications of the ASW-SISO Module
Consider a PMCCC obtained using as constituent codes two equal rate 1/2 systematic, recursive, 16-state convolutional encoders with generating matrix

G(D) = [ 1,  (1 + D + D^3 + D^4) / (1 + D^3 + D^4) ]
The interleaver length is N = 16,384. The overall PMCCC forms a very powerful code for possible use in applications requiring reliable operation at very low signal-to-noise ratios. The performance of the continuous iterative decoding algorithm applied to the concatenated code is obtained by simulation, using the ASW-SISO and the lookup-table algorithms. It is shown in Figure 50, where we plot the bit-error probability as a function of the number of iterations of the decoding algorithm for various values of the bit signal-to-noise ratio, E_b/N_0. It can be seen that the decoding algorithm converges down to an error probability of 10^{-5} for signal-to-noise ratios of 0.2 dB with nine iterations. Moreover, convergence is also guaranteed at signal-to-noise ratios as low as 0.05 dB, which is 0.55 dB from the Shannon capacity limit. As a second example, we construct the serial concatenation of two convolutional codes (an SMCCC) using as an outer code the rate 1/2, 8-state nonrecursive encoder with generating matrix

G(D) = [ 1 + D + D^3,  1 + D ]

and as an inner code, the rate 1/2, 8-state recursive encoder with generating matrix

G(D) = [ 1,  (1 + D + D^3) / (1 + D) ]

The resulting SMCCC has rate 1/4. The interleaver length has been chosen to ensure a decoding delay, in terms of input information bits, equal to 16,384.
The performance of the concatenated code, obtained by simulation as before, is shown in Figure 51, where we plot the bit-error probability as a function of the number of iterations of the decoding algorithm for various values of the bit signal-to-noise ratio, E_b/N_0. It can be seen that the decoding algorithm converges down to an error probability of 10^{-5} for signal-to-noise ratios of 0.10 dB with nine iterations. Moreover, convergence is also guaranteed at signal-to-noise ratios as low as -0.10 dB, which is 0.71 dB from the capacity limit.
As a third, and final, example, we compare the performance of a PMCCC and an SMCCC with the same rate and complexity. The concatenated code rate is 1/3, the CCs are four-state recursive encoders (rates 1/2 + 1/2 for the PMCCC and rates 1/2 + 2/3 for the SMCCC), and the decoding delays in terms of input bits are equal to 16,384. In Figure 52, we report the bit-error probability versus the signal-to-noise ratio for six and nine decoding iterations. As the curves show, the PMCCC outperforms the SMCCC for high values of the bit-error probability. Below 10^{-5} (for nine iterations), the SMCCC behaves significantly better and does not present the "floor" behavior typical of PMCCCs. In particular, at 10^{-6}, the SMCCC has an advantage of 0.5 dB with nine iterations.
3. ADSL Systems
Figures 53, 54, 55 and 56 are models for facilitating accurate and concise DMT signal waveform descriptions. In Figures 53, 54, 55 and 56, Z_i is DMT sub-carrier i (defined in the frequency domain), and x_n is the nth IDFT output sample (defined in the time domain). The DAC and analog processing block construct the continuous transmit voltage waveform corresponding to the discrete digital input samples. More precise specifications for these analog blocks arise indirectly from the analog transmit signal linearity and power spectral density specifications. The use of Figures 53, 54, 55 and 56 as a transmitter reference model allows all initialization signal waveforms to be described through the sequence of DMT symbols, {Z_i}, required to produce that signal. Allowable differences in the characteristics of different digital-to-analog and analog processing blocks will produce somewhat different continuous-time voltage waveforms for the same initialization signal.
3.1 ATU-C transmitter reference models
ATM and STM are application options. ATU-C and ATU-R may be configured for either STM bit sync transport or ATM cell transport.
3.1.1 ATU-C transmitter reference model for STM transport
Figure 53 is a block diagram of an ADSL Transceiver Unit-Central office (ATU-C) transmitter showing the functional blocks and interfaces for the downstream transport of STM data.
The basic STM transport mode is bit serial. The framing mode used determines whether byte boundaries, if present at the V-C interface, shall be preserved. Outside the ASx/LSx serial interfaces, data bytes are transmitted MSB first. All serial processing in the ADSL frame (e.g., CRC, scrambling, etc.) shall, however, be performed LSB first, with the outside world MSB considered by the ADSL as LSB. As a result, the first incoming bit (outside world MSB) shall be the first processed bit inside the ADSL (ADSL LSB). ADSL equipment shall support at least bearer channels AS0 and LS0 downstream; support of other bearer channels is optional. Two paths are shown between the Mux/Sync control and Tone ordering: the "fast" path provides low latency; the interleaved path provides very low error rate and greater latency. An ADSL system supporting STM shall be capable of operating in a dual latency mode for the downstream direction, in which user data is allocated to both paths (i.e., fast and interleaved). An ADSL system supporting STM shall be capable of operating in a single latency mode for both the downstream and upstream directions, in which all user data is allocated to one path (i.e., fast or interleaved). An ADSL system supporting STM transport may be capable of operating in an optional dual latency mode for the upstream, in which user data is allocated to both paths (i.e., fast and interleaved).
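The MSB/LSB convention stated above can be illustrated with a short sketch. This is an illustration of the bit-ordering rule only, not part of the Recommendation: bytes arrive MSB first, and that first bit is the one consumed first by the ADSL serial processing, i.e., it is treated as the ADSL LSB.

```python
def adsl_bit_order(data_bytes):
    """Illustrative only: expand MSB-first bytes into the order in which the
    ADSL serial processing (CRC, scrambler, ...) consumes the bits."""
    processed = []
    for byte in data_bytes:
        for shift in range(7, -1, -1):               # MSB first on the interface ...
            processed.append((byte >> shift) & 1)    # ... is the first bit processed (ADSL LSB)
    return processed

if __name__ == "__main__":
    print(adsl_bit_order([0b10110000]))   # -> [1, 0, 1, 1, 0, 0, 0, 0]
```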
3.1.2 ATU-C transmitter reference model for ATM transport
Figure 54 is a block diagram of an ADSL Transceiver Unit-Central office (ATU-C) transmitter showing the functional blocks and interfaces that are referenced in the ITU-T G.992.1 Recommendation for the downstream transport of ATM data. Byte boundaries at the V-C interface shall be preserved in the ADSL data frame. Outside the ASx/LSx serial interfaces, data bytes are transmitted MSB first. All serial processing in the ADSL frame (e.g., CRC, scrambling, etc.) shall, however, be performed LSB first, with the outside world MSB considered by the ADSL as LSB. The first incoming bit (outside world MSB) will be the first processed bit inside the ADSL (ADSL LSB). The CLP bit of the ATM cell header will be carried in the MSB of the ADSL frame byte (i.e., processed last). ADSL equipment shall support at least bearer channel AS0 downstream. Two paths are shown between the Mux/Sync control and Tone ordering: the "fast" path provides low latency; the interleaved path provides very low error rate and greater latency. An ADSL system supporting ATM transport shall be capable of operating in a single latency mode, in which all user data is allocated to one path (i.e., fast or interleaved). An ADSL system supporting ATM transport may be capable of operating in an optional dual latency mode, in which user data is allocated to both paths (i.e., fast and interleaved).
3.2 ATU-R transmitter reference models
ATM and STM are application options; ATU-C and ATU-R may be configured for either STM bit sync transport or ATM cell transport.
3.2.1 ATU-R transmitter reference model for STM transport
Figure 55 shows a block diagram of an ATU-R transmitter showing the functional blocks and interfaces that are referenced in this Recommendation for the upstream transport of STM.
The basic STM transport mode is bit serial. The framing mode used determines whether byte boundaries, if present at the V-C interface, shall be preserved. Outside the LSx serial interfaces, data bytes are transmitted MSB first. All serial processing in the ADSL frame (e.g., CRC, scrambling, etc.) shall, however, be performed LSB first, with the outside world MSB considered by the ADSL as LSB. As a result, the first incoming bit (outside world MSB) will be the first processed bit inside the ADSL (ADSL LSB). ADSL equipment shall support at least bearer channel LS0 upstream. Two paths are shown between the Mux/Sync control and Tone ordering: the "fast" path provides low latency; the interleaved path provides very low error rate and greater latency. An ADSL system supporting STM shall be capable of operating in a dual latency mode for the downstream direction, in which user data is allocated to both paths (i.e. fast and interleaved). An ADSL system supporting STM shall be capable of operating in a single latency mode for both the downstream and upstream directions, in which all user data is allocated to one path (i.e. fast or interleaved). An ADSL system supporting STM transport may be capable of operating in an optional dual latency mode for the upstream, in which user data is allocated to both paths (i.e. fast and interleaved).
3.2.2 ATU-R transmitter reference model for ATM transport
Figure 56 shows a block diagram of an ATU-R transmitter showing the functional blocks and interfaces that are referenced in this Recommendation for the upstream transport of ATM data. Byte boundaries at the T-R interface shall be preserved in the ADSL data frame. Outside the LSx serial interfaces, data bytes are transmitted MSB first in accordance with ITU-T Recommendations I.361 and I.432. All serial processing in the ADSL frame (e.g., CRC, scrambling, etc.) shall, however, be performed LSB first, with the outside world MSB considered by the ADSL as LSB. As a result, the first incoming bit (outside world MSB) will be the first processed bit inside the ADSL (ADSL LSB), and the CLP bit of the ATM cell header will be carried in the MSB of the ADSL frame byte (i.e., processed last). ADSL equipment shall support at least bearer channel LS0 upstream. Two paths are shown between the Mux/Sync control and Tone ordering: the "fast" path provides low latency; the interleaved path provides very low error rate and greater latency. An ADSL system supporting ATM transport shall be capable of operating in a single latency mode, in which all user data is allocated to one path (i.e. fast or interleaved). An ADSL system supporting ATM transport may be capable of operating in an optional dual latency mode, in which user data is allocated to both paths (i.e. fast and interleaved).
3.3 Transport capacity
An ADSL system may transport up to seven user data streams on seven bearer channels simultaneously: up to four independent downstream simplex bearers (unidirectional from the network operator, i.e. the V-C interface, to the CI, i.e. the T-R interface), and up to three duplex bearers (bidirectional between the network operator and the CI). The three duplex bearers may alternatively be configured as independent unidirectional simplex bearers, and the rates of the bearers in the two directions (network operator toward CI and vice versa) do not need to match.
All bearer channel data rates shall be programmable in any combination of integer multiples of 32 kbit/s. The ADSL data multiplexing format is flexible enough to allow other transport data rates, such as channelizations based on the existing 1.544 Mbit/s rate, but the support of these data rates (non-integer multiples of 32 kbit/s) will be limited by the ADSL system's available capacity for synchronization.
The maximum net data rate transport capacity of an ADSL system will depend on the characteristics of the loop on which the system is deployed, and on certain configurable options that affect overhead. The ADSL bearer channel rates shall be configured during the initialization and training procedure.
The transport capacity of an ADSL system per se is defined only as that of the bearer channels. When, however, an ADSL system is installed on a line that also carries POTS or ISDN signals, the overall capacity is that of POTS or ISDN plus ADSL.
A distinction is made between the transport of synchronous (STM) and asynchronous (ATM) data. An ATU-x shall be configured to support STM transmission or ATM transmission. Bearer channels configured to transport STM data can also be configured to carry ATM data. ADSL equipment may be capable of simultaneously supporting both ATM and STM transport.
If an ATU-x supports a particular bearer channel, it shall support it through both the fast and interleaved paths. In addition, an ADSL system may transport a Network Timing Reference (NTR).
3.3.1 Transport of STM data
ADSL systems transporting STM shall support the simplex bearer channel AS0 and the duplex bearer channel LS0 downstream. Bearer channels AS0, LS0, and any other bearer channels supported shall be independently allocable to a particular latency path as selected by the ATU-C at start-up. The system shall support dual latency downstream.
ADSL systems transporting STM shall support the duplex bearer channel LS0 upstream using a single latency path. Bearer channel AS0 shall support the transport of data at all integer multiples of 32 kbit/s from 32 kbit/s to 6144 kbit/s.
Bearer channel LS0 shall support 16 kbit/s and all integer multiples of 32 kbit/s from 32 kbit/s to 640 kbit/s.
When AS1, AS2, AS3, LS1 and LS2 are provided, they shall support the range of integer multiples of 32 kbit/s shown in Table 4. Support for data rates based on non-integer multiples of 32 kbit/s is also optional. Table 4 shows the required 32 kbit/s integer multiples for transport of STM.
Table 4 - Required 32 kbit/s integer multiples for transport of STM
[Table 4 is reproduced as an image in the original publication.]
Table 5 illustrates the data rate terminology and definitions used for STM transport.
Table 5 - Data rate terminology for STM transport
Data Rate | Equation (kbit/s) | Reference Point
STM data rate = "Net data rate" | Σ(B_I, B_F) × 32 | ASx + LSx (NOTE)
"Net data rate" + frame overhead rate = "Aggregate data rate" | Σ(K_I, K_F) × 32 | A
"Aggregate data rate" + RS coding overhead rate = "Total data rate" | Σ(N_I, N_F) × 32 | B
"Total data rate" + trellis coding overhead rate = Line rate | Σ b_i × 4 | U
NOTE - The net data rate increases by 16 kbit/s if a 16 kbit/s "C"-channel is used.
3.3.2 Transport of ATM data
An ADSL system transporting ATM shall support the single latency mode at all integer multiples of 32 kbit/s up to 6.144 Mbit/s downstream and up to 640 kbit/s upstream. For single latency, ATM data shall be mapped to bearer channel AS0 in the downstream direction and to bearer channel LS0 in the upstream direction. The need for dual latency for ATM services depends on the service application profile, and is under study by the ITU.
One of three different "latency classes" may be used: single latency, not necessarily the same for each direction of transmission; dual latency downstream with single latency upstream; or dual latency both upstream and downstream.
ADSL systems transporting ATM shall support bearer channel AS0 downstream and bearer channel LS0 upstream, with each of these bearer channels independently allocable to a particular latency path as selected by the ATU-C at start-up. Support of dual latency is optional for both downstream and upstream.
If downstream ATM data are transmitted through a single latency path (i.e., 'fast' only or 'interleaved' only), only bearer channel AS0 shall be used, and it shall be allocated to the appropriate latency path. If downstream ATM data are transmitted through both latency paths (i.e., 'fast' and 'interleaved'), only bearer channels AS0 and AS1 shall be used, and they shall be allocated to different latency paths. Similarly, if upstream ATM data are transmitted through a single latency path (i.e., 'fast' only or 'interleaved' only), only bearer channel LS0 shall be used and it shall be allocated to the appropriate latency path. The choice of the fast or interleaved path may be made independently of the choice for the downstream data. If upstream ATM data are transmitted through both latency paths (i.e., 'fast' and 'interleaved'), only bearer channels LS0 and LS1 shall be used and they shall be allocated to different latency paths.
Bearer channel AS0 shall support the transport of data at all integer multiples of 32 kbit/s from 32 kbit/s to 6144 kbit/s. Bearer channel LS0 shall support all integer multiples of 32 kbit/s from 32 kbit/s to 640 kbit/s. Support for data rates based on non-integer multiples of 32 kbit/s is also optional.
When AS1 and LS1 are provided, they shall support the range of integer multiples of 32 kbit/s shown in Table 4. Support for data rates based on non-integer multiples of 32 kbit/s is optional.
Bearer channels AS2, AS3 and LS2 shall not be provided for an ATM-based ATU-x.
Table 6 illustrates the data rate terminology and definitions used for ATM transport.
Table 6 - Data rate terminology for ATM transport
Data Rate | Equation (kbit/s) | Reference Point
53 × 8 × ATM cell rate = "Net data rate" | Σ(B_I, B_F) × 32 | ASx + LSx
"Net data rate" + frame overhead rate = "Aggregate data rate" | Σ(K_I, K_F) × 32 | A
"Aggregate data rate" + RS coding overhead rate = "Total data rate" | Σ(N_I, N_F) × 32 | B
"Total data rate" + trellis coding overhead rate = Line rate | Σ b_i × 4 | U
3.3.3 ADSL system overheads and total bit rates
The total bit rate transmitted by the ADSL system when operating in an optional reduced-overhead framing mode shall include capacity for the data rate transmitted in the ADSL bearer channels and the ADSL system overhead (which includes an ADSL embedded operations channel, EOC; an ADSL overhead control channel, AOC; CRC check bytes; fixed indicator bits for OAM; and FEC redundancy bytes). When operating in the full-overhead mode, the total bit rate shall also include capacity for the synchronization control bytes and capacity for bearer channel synchronization control.
The internal overhead channels and their rates are shown in Table 7.
Table 7 - Internal overhead channel functions and rates
[Table 7 is reproduced as an image in the original publication.]
3.4 ATU-C Functional Characteristics
An ATU-C may support STM transmission or ATM transmission or both. The framing modes that shall be supported depend upon the ATU-C being configured for either STM or ATM transport. If framing mode k is supported, then modes k-1, ..., 0 shall also be supported.
During initialization, the ATU-C and ATU-R shall indicate a framing mode number 0, 1, 2 or 3 which they intend to use. The lowest indicated framing mode shall be used.
Using framing mode 0 ensures that an STM-based ATU-x with an external ATM TC will interoperate with an ATM-based ATU-x. Additional modes of interoperation are possible depending upon optional features provided in either ATU-x.
An ATU-C may provide a Network Timing Reference (NTR). This operation shall be independent of any clocking that is internal to the ADSL system.
3.4.1 STM Transmission Protocol Specific functionalities
3.4.1.1 ATU-C input and output V interfaces for STM transport
The functional data interfaces at the ATU-C for STM transport are shown in Figure 57. Input interfaces for the high-speed downstream simplex bearer channels are designated AS0 through AS3; input/output interfaces for the duplex bearer channels are designated LS0 through LS2. There shall also be a duplex interface for operations, administration, maintenance (OAM) and control of the ADSL system.
3.4.1.2 Downstream simplex channels
Four data input interfaces are defined at the ATU-C for the high-speed downstream simplex channels: AS0, AS1, AS2 and AS3 (ASx in general).
3.4.1.3 Downstream/upstream duplex channels
Three input and output data interfaces are defined at the ATU-C for the duplex channels supported by the ADSL system: LS0, LS1, and LS2 (LSx in general). LS0 is also known as the "C" or control channel; it carries the signaling associated with the ASx bearer channels and it may also carry some or all of the signaling associated with the other duplex bearer channels.
3.4.1.4 Payload transfer delay
The one-way transfer delay for payload bits in all bearers (simplex and duplex) from the V reference point at the central office end (V-C) to the T reference point at the remote end (T-R) for channels assigned to the fast buffer shall be no more than 2 ms. For channels assigned to the interleave buffer it shall be no more than (4 + (S-1)/4 + S×D/4) ms. The same requirement applies in the opposite direction, from the T-R reference point to the V-C reference point.
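For illustration, the following small sketch (not part of the Recommendation text) evaluates the interleave-path latency bound quoted above for a couple of assumed parameter choices; S and D are the symbols-per-codeword and interleave-depth parameters defined later in the FEC and interleaving subclauses.

```python
# Worked evaluation of the bound (4 + (S-1)/4 + S*D/4) ms from the text above.
def max_interleave_delay_ms(S: int, D: int) -> float:
    return 4 + (S - 1) / 4 + S * D / 4

print(max_interleave_delay_ms(1, 64))   # 4 + 0 + 16      = 20.0 ms
print(max_interleave_delay_ms(2, 32))   # 4 + 0.25 + 16   = 20.25 ms
```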
3.4.1.5 Framing structure for STM transport
An ATU-C configured for STM transport shall support the full overhead framing structure 0. The support of full overhead framing structure 1 and the reduced overhead framing structures 2 and 3 is optional. Preservation of V-C interface byte boundaries (if present) at the U-C interface may be supported for any of the U-C interface framing structures. An ATU-C configured for STM transport may support insertion of a Network Timing Reference (NTR).
3.4.2 ATM Transmission Protocol Specific functionalities
3.4.2.1 ATU-C input and output V interface for ATM transport
The functional data interfaces at the ATU-C for ATM transport are shown in Figure 58. The ATM channel ATM0 shall always be provided; the channel ATM1 may be provided for support of dual latency mode. Each channel operates as an interface to a physical layer pipe. When operating in dual latency mode, no fixed allocation between the ATM channels 0 and 1 on one hand and the transport of 'fast' and 'interleaved' data on the other hand is assumed; this relationship is configured inside the ATU-C.
Flow control functionality shall be available on the V reference point to allow the ATU-C (i.e. the physical layer) to control the cell flow to and from the ATM layer. This functionality is represented by Tx_Cell_Handshake and Rx_Cell_Handshake. A cell may be transferred from the ATM to the PHY layer only after the ATU-C has activated the Tx_Cell_Handshake. Similarly, a cell may be transferred from the PHY layer to the ATM layer only after the Rx_Cell_Handshake. This functionality is important to avoid cell overflow or underflow in the ATU-C and ATM layers.
There shall also be a duplex interface for operations, administration, maintenance (OAM) and control of the ADSL system.
3.4.2.2 Payload transfer delay
The one-way transfer delay (excluding cell specific functionalities) for payload bits in all bearers (simplex and duplex) from the V reference point at the central office end (V-C) to the T reference point at the remote end (T-R) for channels assigned to the fast buffer shall be no more than 2 ms.
For channels assigned to the interleave buffer it shall be no more than (4 + (S-1)/4 + S×D/4) ms. The same requirement applies in the opposite direction, from the T-R reference point to the V-C reference point.
3.4.2.3 ATM Cell specific functionalities
3.4.2.3.1 Idle Cell Insertion
Idle cells shall be inserted in the transmit direction for cell rate de-coupling. Idle cells are identified by the standardized pattern for the cell header given in ITU-T Recommendation I.432.
3.4.2.3.2 Header Error Control (HEC) Generation
The HEC byte shall be generated in the transmit direction as described in ITU-T Recommendation I.432, including the recommended modulo 2 addition (XOR) of the pattern 01010101 to the HEC bits. The generator polynomial coefficient set used and the HEC sequence generation procedure shall be in accordance with ITU-T Recommendation I.432.
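As an illustration of the procedure summarized above, the following is a minimal sketch assuming the well-known I.432 parameters (a CRC-8 with generator x^8 + x^2 + x + 1 computed over the first four header octets, then XORed with 01010101); the function name and the example header are illustrative only, not taken from the Recommendation.

```python
def atm_hec(header4: bytes) -> int:
    """Sketch of HEC generation: CRC-8 (x^8 + x^2 + x + 1) over 4 header bytes, XOR 0x55."""
    assert len(header4) == 4
    crc = 0
    for byte in header4:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ 0x07) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc ^ 0x55  # modulo-2 addition of the 01010101 coset pattern

# Example: the commonly cited idle-cell header 00 00 00 01 yields HEC 0x52.
print(hex(atm_hec(bytes([0x00, 0x00, 0x00, 0x01]))))
```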
3.4.2.3.3 Cell payload scrambling
Scrambling of the cell payload field shall be used in the transmit direction to improve the security and robustness of the HEC cell delineation mechanism. In addition, it randomizes the data in the information field, for possible improvement of the transmission performance. The self-synchronizing scrambler polynomial x^43 + 1 and the procedures defined in ITU-T Recommendation I.432 shall be implemented.
3.4.2.3.4 Bit timing and ordering
When interfacing ATM data bytes to the AS0 or AS1 bearer channel, the most significant bit (MSB) shall be sent first. The AS0 or AS1 bearer channel data rates shall be integer multiples of 32 kbit/s, with bit timing synchronous with the ADSL downstream timing base.
3.4.2.3.5 Cell Delineation.
The cell delineation function permits the identification of cell boundaries in the payload. It uses the HEC field in the cell header. Cell delineation shall be performed using a coding law checking the HEC field in the cell header according to the algorithm described in ITU-T Recommendation 1.432. The ATM cell delineation state machine is shown in Figure 59.
In the HUNT state, the delineation process is performed by checking bit by bit for the correct HEC. Once such an agreement is found, it is assumed that one header has been found, and the method enters the PRESYNC state. When byte boundaries are available within the receiving Physical Layer prior to cell delineation, as with framing modes 1, 2 and 3, the cell delineation process may be performed byte by byte. In the PRESYNC state, the delineation process is performed by checking cell by cell for the correct HEC. The process repeats until the correct HEC has been confirmed DELTA times consecutively. If an incorrect HEC is found, the process returns to the HUNT state. In the SYNC state the cell delineation will be assumed to be lost if an incorrect HEC is obtained ALPHA times consecutively. (With reference to ITU-T Recommendation I.432, no recommendation is made for the values of ALPHA and DELTA as the choice of these values is not considered to affect interoperability. However, it should be noted that the use of the values suggested in ITU-T Recommendation I.432 (ALPHA=7, DELTA=6) may be inappropriate due to the particular transmission characteristics of ADSL.)
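The HUNT/PRESYNC/SYNC behaviour described above can be sketched as a small state machine; the class and method names are illustrative, and the handling of the first header found in HUNT (whether it counts toward DELTA) is an assumption rather than a normative detail.

```python
class CellDelineation:
    """Sketch of the cell delineation state machine driven by per-header HEC checks."""

    def __init__(self, alpha: int, delta: int):
        self.state = "HUNT"
        self.alpha = alpha   # consecutive incorrect HECs before sync is declared lost
        self.delta = delta   # consecutive correct HECs before sync is declared found
        self.count = 0

    def on_hec_check(self, hec_ok: bool) -> str:
        if self.state == "HUNT":
            if hec_ok:                        # plausible header found: start confirming
                self.state, self.count = "PRESYNC", 0
        elif self.state == "PRESYNC":
            if not hec_ok:                    # confirmation failed: back to hunting
                self.state, self.count = "HUNT", 0
            else:
                self.count += 1
                if self.count >= self.delta:  # DELTA consecutive correct HECs
                    self.state, self.count = "SYNC", 0
        else:  # SYNC
            if hec_ok:
                self.count = 0
            else:
                self.count += 1
                if self.count >= self.alpha:  # ALPHA consecutive incorrect HECs
                    self.state, self.count = "HUNT", 0
        return self.state
```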
3.4.2.3.6 Header Error Control Verification
The HEC covers the entire cell header. The code used for this function is capable of either single-bit error correction or multiple-bit error detection. Error detection shall be implemented as defined in ITU-T Recommendation I.432, with the exception that any HEC error shall be considered as a multiple-bit error, and therefore HEC error correction shall not be performed.
3.4.2.4 Framing Structure for ATM transport
An ATU-C configured for ATM transport shall support the full overhead framing structures 0 and 1. The support of reduced overhead framing structures 2 and 3 is optional. The ATU-C transmitter shall preserve V-C interface byte boundaries (explicitly present or implied by ATM cell boundaries) at the U-C interface, independent of the U-C interface framing structure.
To ensure framing structure 0 interoperability between an ATM ATU-C and an ATM cell TC plus an STM ATU-R (i.e., ATM transported over STM), an STM ATU-R transporting ATM cells and not preserving T-R byte boundaries at the U-R interface shall indicate during initialization that frame structure 0 is the highest frame structure supported. An STM ATU-R transporting ATM cells and preserving T-R byte boundaries at the U-R interface shall indicate during initialization that frame structure 0, 1, 2 or 3 is the highest frame structure supported. An ATM ATU-C receiver operating in framing structure 0 cannot assume that the ATU-R transmitter will preserve T-R interface byte boundaries at the U-R interface and shall therefore perform the cell delineation bit-by-bit.
An ATU-C configured for ATM transport may support insertion of a Network Timing Reference (NTR).
3.4.3 Network timing reference (NTR)
3.4.3.1 Need for NTR
Some services require that a reference clock be available in the higher layers of the protocol stack (i.e. above the physical layer); this is used to guarantee end-to-end synchronization of the transmit and receive sides. Examples are Voice and Telephony Over ATM (VTOA) and Desktop Video Conferencing (DVC).
To support the distribution of a timing reference over the network, the ADSL system may transport an 8 kHz timing marker as the NTR. This 8 kHz timing marker may be used for voice/video playback at the decoder (D/A converter) in DVC and VTOA applications. The 8 kHz timing marker is input to the ATU-C as part of the interface at the V-C reference point.
3.4.3.2 Transport of the NTR
The intention of the NTR transport mechanism is that the ATU-C provides timing information at the U-C reference point to enable the ATU-R to deliver to the T-R reference point timing information that has a timing accuracy corresponding to the accuracy of the clock provided to the V-C reference point. If provided, the NTR shall be inserted in the U-C framing structure as follows:
a) The ATU-C may generate an 8 kHz local timing reference (LTR) by dividing its sampling clock by the appropriate integer (276 if 2.208 MHz is used).
b) It shall transmit the change in phase offset between the input NTR and the LTR (measured in cycles of the 2.208 MHz clock, that is, units of approximately 452 ns) from the previous superframe to the present one. This shall be encoded into four bits ntr3-ntr0 (with ntr3 the MSB), representing a signed integer in the -8 to +7 range in 2's-complement notation. The bits ntr3-ntr0 shall be carried in the indicator bits 23 (ntr3) to 20 (ntr0); see Table 9.
c) A positive value of the change of phase offset shall indicate that the LTR is higher in frequency than the NTR.
d) Alternatively, the ATU-C may choose to lock its downstream sampling clock (2.208 MHz) to 276 times the NTR frequency; in that case it shall encode the change of phase offset as zero.
The NTR, as specified by ANSI Standard T1.101, has a maximum frequency variation of ±32 ppm. The LTR has a maximum frequency variation of ±50 ppm. The maximum mismatch is therefore ±82 ppm. This would result in an average change of phase offset of approximately ±3.5 clock cycles over one 17 ms superframe, which can be mapped into 4 overhead bits.
One method that the ATU-C may use to measure this change of phase offset is shown in Figure 60.
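A minimal sketch of step (b) above, encoding a measured change of phase offset (in 2.208 MHz clock cycles) into the four bits ntr3-ntr0; the clamping to the representable range and the function name are illustrative additions, not part of the specification text.

```python
def encode_ntr_bits(delta_cycles: int):
    """Encode the per-superframe change of phase offset into (ntr3, ntr2, ntr1, ntr0)."""
    value = max(-8, min(7, delta_cycles))  # representable range of a 4-bit signed field
    word = value & 0xF                     # 2's-complement, 4 bits
    ntr3, ntr2, ntr1, ntr0 = (word >> 3) & 1, (word >> 2) & 1, (word >> 1) & 1, word & 1
    return ntr3, ntr2, ntr1, ntr0          # carried in indicator bits 23..20

print(encode_ntr_bits(-3))                 # -> (1, 1, 0, 1)
```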
3.4.4 Framing
This subclause specifies framing of the downstream signal (ATU-C transmitter). Two types of framing are defined: full overhead and reduced overhead. Furthermore, two versions of full overhead and two versions of reduced overhead are defined. The four resulting framing modes are defined in Table 8, and shall be referred to as framing modes 0, 1, 2 and 3.
Table 8 - Definition of framing modes
[Table 8 is reproduced as an image in the original publication.]
Requirements for the framing modes to be supported depend upon the ATU-C being configured for either STM or ATM transport. The ATU-C shall indicate during initialization the highest framing structure number it supports. If the ATU-C indicates it supports framing structure A, it shall also support all framing structures A-1 to 0. If the ATU-R indicates a lower framing structure number during initialization, the ATU-C shall fall back to the framing structure number indicated by the ATU-R. Outside the ASx/LSx serial interfaces data bytes are transmitted MSB first in accordance with ITU-T Recommendations G.703, G.709, I.361, and I.432. All serial processing in the ADSL frame (e.g., CRC, scrambling, etc.) shall, however, be performed LSB first, with the outside world MSB considered by the ADSL as LSB. As a result, the first incoming bit (outside world MSB) will be the first processed bit inside the ADSL (ADSL LSB).
3.4.4.1 Data symbols
Figures 53 and 54 show functional block diagrams of the ATU-C transmitter with reference points for data framing. Up to four downstream simplex data channels and up to three duplex data channels shall be synchronized to the 4 kHz ADSL DMT frame rate, and multiplexed into two separate data buffers (fast and interleaved). A cyclic redundancy check (CRC), scrambling, and forward error correction (FEC) coding shall be applied to the contents of each buffer separately, and the data from the interleaved buffer shall then be passed through an interleaving function. The two data streams shall then be tone ordered, and combined into a data symbol that is input to the constellation encoder. After constellation encoding the data shall be modulated to produce an analog signal for transmission across the customer loop.
A bit-level framing pattern shall not be inserted into the data symbols of the frame or superframe structure. DMT frame (i.e. symbol) boundaries are delineated by the cyclic prefix inserted by the modulator. Superframe boundaries are determined by the synchronization symbol, which is also inserted by the modulator and carries no user data.
Because of the addition of FEC redundancy bytes and data interleaving, the data frames (i.e. bit-level data prior to constellation encoding) have a different structural appearance at the three reference points through the transmitter. As shown in Figures 53 and 54, the reference points for which data framing is described in the following subclauses are:
a) A (Mux data frame): the multiplexed, synchronized data after the CRC has been inserted;
b) B (FEC output data frame): the data frame generated at the output of the FEC encoder at the DMT symbol rate, where an FEC block may span more than one DMT symbol period;
c) C (constellation encoder input data frame): the data frame presented to the constellation coder.
3.4.4.1.1 Superframe structure
ADSL uses the superframe structure shown in Figure 61. Each superframe is composed of 68 data frames, numbered from 0 to 67, which are encoded and modulated into DMT symbols, followed by a synchronization symbol, which carries no user or overhead bit-level data and is inserted by the modulator to establish superframe boundaries. From the bit-level and user data perspective, the DMT symbol rate is 4000 baud (period = 250 μs), but in order to allow for the insertion of the synchronization symbol the transmitted DMT symbol rate is (69/68) × 4000 baud. Each data frame within the superframe contains data from the fast buffer and the interleaved buffer. During each ADSL superframe, eight bits shall be reserved for the CRC on the fast data buffer (crc0-crc7), and 24 indicator bits (ib0-ib23) shall be assigned for OAM functions. As shown in Figure 62, the synchronization byte of the fast data buffer ("fast byte") carries the CRC check bits in frame 0 and the indicator bits in frames 1, 34, and 35. The fast byte in other frames is assigned in even-/odd-frame pairs to either the EOC or to synchronization control of the bearer channels assigned to the fast buffer.
Bit 0 of the fast byte in an even-numbered frame (other than frames 0 and 34) and bit 0 of the fast byte of the odd-numbered frame immediately following shall be set to "0" to indicate that these frames carry synchronization control information. When they are not required for synchronization control, CRC, or indicator bits, the fast bytes of two successive ADSL frames, beginning with an even-numbered frame, may contain indications of "no synchronization action", or alternatively they may be used to transmit one EOC message, consisting of 13 bits. The indicator bits are defined in Table 9. Bit 0 of the fast byte in an even-numbered frame (other than frames 0 and 34) and bit 0 of the fast byte of the odd-numbered frame immediately following shall be set to "1" to indicate that these frames carry a 13-bit EOC message plus one additional bit, r1. The r1 bit is reserved for future use and shall be set to 1.
Table 9 - Definition of indicator bits. ATU-C transmitter (fast data buffer, downstream direction)
[Table 9 is reproduced as an image in the original publication.]
Eight bits per ADSL superframe shall be used for the CRC on the interleaved data buffer (crc0-crc7). As shown in Figure 63 and Figure 65, the synchronization byte of the interleaved data buffer ("sync byte") carries the CRC check bits for the previous superframe in frame 0. In all other frames (1 through 67), the sync byte shall be used for synchronization control of the bearer channels assigned to the interleaved data buffer or used to carry an ADSL overhead control (AOC) channel. In the full overhead mode, when any bearer channel appears in the interleave buffer, the AOC data shall be carried in the LEX byte, and the sync byte shall designate when the LEX byte contains AOC data and when it contains data bytes from the bearer channel. When no bearer channels are allocated to the interleaved data buffer (i.e., all B_I(ASx) = B_I(LSx) = 0), the sync byte shall carry the AOC data directly.
3.4.4.1.2 Frame structure (with full overhead)
Each data frame shall be encoded into a DMT symbol. As shown in Figure 61, each frame is composed of a fast data buffer and an interleaved data buffer, and the frame structure has a different appearance at each of the reference points (A, B, and C). The bytes of the fast data buffer shall be clocked into the constellation encoder first, followed by the bytes of the interleaved data buffer. Bytes are clocked least significant bit first.
Each bearer channel shall be assigned to either the fast or the interleaved buffer during initialization, and a pair of bytes, [B_F, B_I], is transmitted for each bearer channel, where B_F and B_I designate the number of bytes allocated to the fast and interleaved buffers, respectively.
The seven [B_F, B_I] pairs that specify the downstream bearer channel rates are: B_F(ASx), B_I(ASx) for x = 0, 1, 2 and 3, for the downstream simplex channels; and B_F(LSx), B_I(LSx) for x = 0, 1 and 2, for the (downstream transport of the) duplex channels.
The rules for allocation are as follows:
• For any bearer channel X (except the 16 kbit/s C channel option), either B_F(X) = the number of bytes per frame of the fast buffer and B_I(X) = 0, or B_F(X) = 0 and B_I(X) = the number of bytes per frame of the interleaved buffer.
• For the 16 kbit/s C channel option, B_F(LS0) = 255 (binary 11111111) and B_I(LS0) = 0, or B_F(LS0) = 0 and B_I(LS0) = 255.
3.4.4.1.2.1 Fast data buffer (with full overhead)
The frame structure of the fast data buffer shall be as shown in Figure 64 for reference points A and B, which are defined in Figures 53 and 54.
The following shall hold for the parameters shown in Figure 64:

C_F(LS0) = 0 if B_F(LS0) = 255 (binary 11111111)
C_F(LS0) = B_F(LS0) otherwise

N_F = K_F + R_F, where R_F = number of FEC redundancy bytes, and

K_F = 1 + Σ(i=0 to 3) B_F(ASi) + A_F + C_F(LS0) + Σ(j=1 to 2) B_F(LSj) + L_F

where

A_F = 0 if Σ(i=0 to 3) B_F(ASi) = 0
A_F = 1 otherwise

and

L_F = 0 if B_F(ASi) = 0 for i = 0-3 and B_F(LSj) = 0 for j = 0-2
L_F = 1 otherwise (including the case B_F(LS0) = 255)
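A small sketch that evaluates C_F(LS0), A_F, L_F, K_F and N_F from an assumed set of byte allocations; the list-based interface and the example numbers are illustrative, not taken from the Recommendation.

```python
def fast_buffer_sizes(bf_as, bf_ls, rf):
    """bf_as: [B_F(AS0)..B_F(AS3)], bf_ls: [B_F(LS0)..B_F(LS2)], rf: FEC redundancy bytes."""
    cf_ls0 = 0 if bf_ls[0] == 255 else bf_ls[0]                        # C_F(LS0)
    af = 0 if sum(bf_as) == 0 else 1                                   # AEX byte present?
    lf = 0 if sum(bf_as) == 0 and all(b == 0 for b in bf_ls) else 1    # LEX byte present?
    kf = 1 + sum(bf_as) + af + cf_ls0 + sum(bf_ls[1:]) + lf
    nf = kf + rf
    return kf, nf

# Example: AS0 carrying 96 bytes/frame (3.072 Mbit/s), LS0 carrying 2 bytes/frame, R_F = 4
print(fast_buffer_sizes([96, 0, 0, 0], [2, 0, 0], 4))                  # -> (101, 105)
```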
At reference point A (Mux data frame) in Figures 53 and 54, the fast buffer shall always contain at least the fast byte. This is followed by B_F(AS0) bytes of channel AS0, then B_F(AS1) bytes of channel AS1, B_F(AS2) bytes of channel AS2 and B_F(AS3) bytes of channel AS3. Next come the bytes for any duplex (LSx) channels allocated to the fast buffer. If any B_F(ASx) is non-zero, then both an AEX and a LEX byte follow the bytes of the last LSx channel, and if any B_F(LSx) is non-zero, the LEX byte shall be included. When B_F(LS0) = 255, no bytes are included for the LS0 channel; instead, the 16 kbit/s C channel shall be transported in every other LEX byte on average, using the synchronization byte to denote when to add the LEX byte to the LS0 bearer channel.
R_F FEC redundancy bytes shall be added to the mux data frame (reference point A) to produce the FEC output data frame (reference point B), where R_F is given by the options used during initialization.
Because the data from the fast buffer is not interleaved, the constellation encoder input data frame (reference point C) is identical to the FEC output data frame (reference point B).
3.4.4.1.2.2 Interleaved data buffer (with full overhead)
The frame structure of the interleaved data buffer is shown in Figure 65 for reference points A and B, which are defined in Figures 53 and 54.
The following shall hold for the parameters shown in Figure 65:

C_I(LS0) = 0 if B_I(LS0) = 255 (binary 11111111)
C_I(LS0) = B_I(LS0) otherwise

N_I = (S × K_I + R_I) / S, where R_I = number of FEC redundancy bytes and S = number of DMT symbols per FEC codeword, and

K_I = 1 + Σ(i=0 to 3) B_I(ASi) + A_I + C_I(LS0) + Σ(j=1 to 2) B_I(LSj) + L_I

where

A_I = 0 if Σ(i=0 to 3) B_I(ASi) = 0
A_I = 1 otherwise

and

L_I = 0 if B_I(ASi) = 0 for i = 0-3 and B_I(LSj) = 0 for j = 0-2
L_I = 1 otherwise (including the case B_I(LS0) = 255)

At reference point A, the Mux data frame, the interleaved data buffer shall always contain at least the sync byte. The rest of the buffer shall be built in the same manner as the fast buffer, substituting B_I in place of B_F. The length of each mux data frame is K_I bytes, as defined in Figure 65.
The FEC coder shall take in S mux data frames and append R_I FEC redundancy bytes to produce the FEC codeword of length N_FEC = S × K_I + R_I bytes. The FEC output data frames shall contain N_I = N_FEC / S bytes, where N_I is an integer.
When S > 1, then for the S frames in an FEC codeword, the FEC output data frame (reference point B) shall partially overlap two mux data frames for all except the last frame, which shall contain the R_I FEC redundancy bytes. The FEC output data frames are interleaved to a specified interleave depth. The interleaving process delays each byte of a given FEC output data frame by a different amount, so that the constellation encoder input data frames will contain bytes from many different FEC data frames. At reference point A in the transmitter, mux data frame 0 of the interleaved data buffer is aligned with the ADSL superframe and with mux data frame 0 of the fast data buffer (this is not true at reference point C). At the receiver, the interleaved data buffer will be delayed by (S × interleave depth × 250) μs with respect to the fast data buffer, and data frame 0 (containing the CRC bits for the interleaved data buffer) will appear a fixed number of frames after the beginning of the receiver superframe.
3.4.4.1.3 Cyclic redundancy check (CRC)
Two cyclic redundancy checks (CRCs), one for the fast data buffer and one for the interleaved data buffer, shall be generated for each superframe and transmitted in the first frame of the following superframe. Eight bits per buffer type (fast or interleaved) per superframe are allocated to the CRC check bits. These bits are computed from the k message bits using the equation:

crc(D) = M(D) D^8 modulo G(D)

where
M(D) = m_0 D^(k-1) + m_1 D^(k-2) + ... + m_(k-2) D + m_(k-1) is the message polynomial,
G(D) = D^8 + D^4 + D^3 + D^2 + 1 is the generating polynomial,
crc(D) = c_0 D^7 + c_1 D^6 + ... + c_6 D + c_7 is the check polynomial, and
D is the delay operator.

That is, the CRC is the remainder when M(D) D^8 is divided by G(D). The CRC check bits are transported in the synchronization bytes (fast and interleaved, 8 bits each) of frame 0 for each data buffer. The bits (i.e. message polynomials) covered by the CRC include:
Fast data buffer:
• frame 0: ASx bytes (x = 0, 1, 2, 3), LSx bytes (x = 0, 1, 2), followed by any AEX and LEX bytes;
• all other frames: fast byte, followed by ASx bytes (x = 0, 1, 2, 3), LSx bytes (x = 0, 1, 2), and any AEX and LEX bytes.
Interleaved data buffer:
• frame 0: ASx bytes (x = 0, 1, 2, 3), LSx bytes (x = 0, 1, 2), followed by any AEX and LEX bytes;
• all other frames: sync byte, followed by ASx bytes (x = 0, 1, 2, 3), LSx bytes (x = 0, 1, 2), and any AEX and LEX bytes.
Each byte shall be clocked into the CRC least significant bit first.
The number of bits over which the CRC is computed varies with the allocation of bytes to the fast and interleaved data buffers (the numbers of bytes in ASx and LSx vary according to the [B_F, B_I] pairs; AEX is present in a given buffer only if at least one ASx is allocated to that buffer; LEX is present in a given buffer only if at least one ASx or one LSx is allocated to that buffer).
Because of the flexibility in assignment of bearer channels to the fast and interleaved data buffers, CRC field lengths over an ADSL superframe will vary from approximately 67 bytes to approximately 14,875 bytes.
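A minimal bit-serial sketch of the superframe CRC defined above, with G(D) = D^8 + D^4 + D^3 + D^2 + 1, a zero initial register, and bytes clocked in least significant bit first; the example message bytes are arbitrary.

```python
def adsl_crc8(message: bytes) -> int:
    """Sketch: compute crc(D) = M(D)*D^8 modulo G(D), bytes clocked LSB first."""
    crc = 0
    for byte in message:
        for bit_pos in range(8):                 # LSB of each byte first
            bit = (byte >> bit_pos) & 1
            feedback = ((crc >> 7) & 1) ^ bit    # division of M(D)*D^8 by G(D)
            crc = (crc << 1) & 0xFF
            if feedback:
                crc ^= 0x1D                      # D^4 + D^3 + D^2 + 1 (low taps of G)
    return crc

print(hex(adsl_crc8(bytes([0x0F, 0xA5, 0x00, 0x37]))))   # arbitrary example input
```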
3.4.4.2 Synchronization
If the bit timing base of the input user data streams is not synchronous with the ADSL modem timing base, the input data streams shall be synchronized to the ADSL timing base using the synchronization control mechanism (consisting of the synchronization control byte and the AEX and LEX bytes). Forward error correction coding shall always be applied to the synchronization control byte(s).
If the bit timing base of the input user data streams is synchronous with the ADSL modem timing base, then the synchronization control mechanism is not needed, and the synchronization control byte shall always indicate "no synchronization action" (see Table 10 and Table 11).
3.4.4.2.1 Synchronization for the fast data buffer
Synchronization control for the fast data buffer may occur in frames 2 through 33, and 36 through 67, of an ADSL superframe, where the fast byte may be used as the synchronization control byte. No synchronization action shall be taken for those frames in which the fast byte is used for CRC, fixed indicator bits, or EOC.
The format of the fast byte when used as synchronization control for the fast data buffer shall be as given in Table 10.
Table 10 - Fast byte format for synchronization
[Table 10 is reproduced as an image in the original publication.]
ADSL deployments may need to inter-work with DS1 (1.544 Mbit/s) or DS1C (3.152 Mbit/s) rates. The synchronization control option that allows adding up to two bytes to an ASx bearer channel provides sufficient overhead capacity to transport combinations of DS1 or DS1C channels transparently (without interpreting, or stripping and regenerating, the framing embedded within the DS1 or DS1C). The synchronization control algorithm shall, however, guarantee that the fast byte in some minimum number of frames is available to carry EOC frames, so that a minimum EOC rate (4 kbit/s) may be maintained.
When the data rate of the C channel is 16 kbit/s, the LS0 bearer channel is transported in the LEX byte, using the "add LEX byte to designated LSx channel" code, with LS0 as the designated channel, every other frame on average.
If the bit timing base of the input bearer channels (ASx, LSx) is synchronous with the ADSL modem timing base, then ADSL systems need not perform synchronization control (by adding or deleting AEX or LEX bytes to/from the designated ASx and LSx channels). In this case, the synchronization control byte shall indicate "no synchronization action" (i.e., sc7-0 coded "XX0011X0" in binary, with X discretionary).
3.4.4.2.2 Synchronization for the interleaved data buffer
Synchronization control for the interleaved data buffer can occur in frames 1 through 67 of an ADSL superframe, where the sync byte may be used as the synchronization control byte. No synchronization action shall be taken during frame 0, where the sync byte is used for CRC, or during frames in which the LEX byte carries the AOC. The format of the sync byte when used as synchronization control for the interleaved data buffer shall be as given in Table 11. In the case where no signals are allocated to the interleaved data buffer, the sync byte shall carry the AOC data directly, as shown in Figure 63.
Table 11 - Sync byte format for synchronization
[Table 11 is reproduced as an image in the original publication.]
ADSL deployments may need to inter-work with DS1 (1.544 Mbit/s) or DS1C (3.152 Mbit/s) rates. The synchronization control option that allows adding up to two bytes to an ASx bearer channel provides sufficient overhead capacity to transport combinations of DS1 or DS1C channels transparently (without interpreting, or stripping and regenerating, the framing embedded within the DS1 or DS1C). When the data rate of the C channel is 16 kbit/s, the LS0 bearer channel is transported in the LEX byte, using the "add LEX byte to designated LSx channel" code, with LS0 as the designated channel, every other frame on average.
If the bit timing base of the input bearer channels (ASx, LSx) is synchronous with the ADSL modem timing base, then ADSL systems need not perform synchronization control by adding or deleting AEX or LEX bytes to/from the designated ASx and LSx channels. In this case, the synchronization control byte shall indicate "no synchronization action". In this case, when framing mode 1 is used, sc7-0 shall always be coded "XX0011XX" in binary, with X discretionary. When sc0 is set to 1, the LEX byte shall carry AOC. When sc0 is set to 0, the LEX byte shall be coded 00 (hexadecimal). The sc0 bit may be set to 0 only in between transmissions of 5 concatenated and identical AOC messages.
3.4.4.3 Reduced overhead framing
The format described for full overhead framing includes overhead to allow for the synchronization of the seven ASx and LSx bearer channels. When the synchronization function is not required, the ADSL equipment may operate in a reduced overhead mode. This mode retains all the full overhead mode functions except synchronization control.
3.4.4.3.1 Reduced overhead framing with separate fast and sync bytes
The AEX and LEX bytes shall be eliminated from the ADSL frame format, and both the fast and sync bytes shall carry overhead information. The fast byte carries the fast buffer CRC, indicator bits, and EOC messages, and the sync byte carries the interleaved buffer CRC and AOC messages. The assignment of overhead functions to the fast and sync bytes when using the full overhead framing and when using the reduced overhead framing with separate fast and sync bytes shall be as shown in Table 12.
In the reduced overhead framing with separate fast and sync bytes, the structure of the fast data buffer shall be as shown in Figure 64 with A_F and L_F set to 0. The structure of the interleaved data buffer shall be as shown in Figure 65 with A_I and L_I set to 0.
Table 12 - Overhead functions for framing modes
[Table 12 is reproduced as an image in the original publication.]
NOTE - In the reduced overhead mode only the "no synchronization action" code shall be used.
3.4.4.3.2 Reduced overhead framing with merged fast and sync bytes
In the single latency mode, data is assigned to only one data buffer (fast or interleaved). If data is assigned to only the fast buffer, then only the fast byte shall be used to carry overhead information. If data is assigned only to the interleaved buffer, then only the sync byte shall be used to carry overhead information. Reduced overhead framing with merged fast and sync bytes shall not be used when operating in dual latency mode.
For ADSL systems transporting data using a single data buffer (fast or interleaved), the CRC, indicator, EOC and AOC functions shall be carried in a single overhead byte assigned to separate data frames within the superframe structure. The CRC remains in frame 0 and the indicator bits in frames 1, 34, and 35. The AOC and EOC bytes are assigned to alternate pairs of frames. For ADSL equipment operating in single latency mode using the reduced overhead framing with merged fast and sync bytes, the assignment of overhead functions shall be as shown in Table 13.
In the single latency mode using the reduced overhead framing with merged fast and sync bytes, only one data buffer shall be used. If the fast data buffer is used, the structure of the fast data buffer shall be as shown in Figure 64 (with A_F and L_F set to 0) and the interleaved data buffer shall be empty (no sync byte and K_I = 0). If the interleaved data buffer is used, the structure of the interleaved data buffer shall be as shown in Figure 65 (with A_I and L_I set to 0) and the fast data buffer shall be empty (no fast byte and K_F = 0).
Table 13 - Overhead functions for reduced overhead mode with merged fast and sync bytes
[Table 13 is reproduced as an image in the original publication.]
3.4.5 Scramblers
The binary data streams output (LSB of each byte first) from the fast and interleaved data buffers shall be scrambled separately using the following algorithm for both:
d'_n = d_n ⊕ d'_(n-18) ⊕ d'_(n-23)
where d_n is the n-th output from the fast or interleaved buffer (i.e., input to the scrambler), and d'_n is the n-th output from the corresponding scrambler. This is illustrated in Figure 66.
These scramblers are applied to the serial data streams without reference to any framing or symbol synchronization. Descrambling in receivers can likewise be performed independent of symbol synchronization.
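A minimal sketch of the scrambler defined above, operating on a serial bit stream already in LSB-first order; the all-zero initial history is an assumption made for illustration, not something the text specifies.

```python
def scramble(bits):
    """Self-synchronizing scrambler: d'_n = d_n XOR d'_{n-18} XOR d'_{n-23}."""
    state = [0] * 23                    # history d'_{n-1} ... d'_{n-23}
    out = []
    for d in bits:
        dp = d ^ state[17] ^ state[22]  # taps at delays 18 and 23
        out.append(dp)
        state = [dp] + state[:-1]       # shift the scrambler history
    return out

# With an all-zero history, the first 18 outputs simply equal the inputs.
print(scramble([1, 0, 1, 1, 0, 0, 1, 0]))
```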
3.4.6 Forward error correction
The ATU-C shall support downstream transmission with at least any combination of the FEC coding capabilities shown in Table 14.
Table 14 - Minimum FEC coding capabilities for ATU-C
[Table 14 is reproduced as an image in the original publication.]
The ATU-C shall also support upstream transmission with at least any combination of the FEC coding capabilities shown in Table 23.
3.4.6.1 Reed-Solomon coding
R (i.e., R_F or R_I) redundant check bytes c_0, c_1, ..., c_(R-2), c_(R-1) shall be appended to K (i.e., K_F or S × K_I) message bytes m_0, m_1, ..., m_(K-2), m_(K-1) to form a Reed-Solomon codeword of size N = K + R bytes. The check bytes are computed from the message bytes using the equation:

C(D) = M(D) D^R modulo G(D)

where
M(D) = m_0 D^(K-1) + m_1 D^(K-2) + ... + m_(K-2) D + m_(K-1) is the message polynomial,
C(D) = c_0 D^(R-1) + c_1 D^(R-2) + ... + c_(R-2) D + c_(R-1) is the check polynomial, and
G(D) = Π (D + α^i) is the generator polynomial of the Reed-Solomon code, where the index of the product runs from i = 0 to R-1.

That is, C(D) is the remainder obtained from dividing M(D) D^R by G(D). The arithmetic is performed in the Galois Field GF(256), where α is a primitive element that satisfies the primitive binary polynomial x^8 + x^4 + x^3 + x^2 + 1. A data byte (d_7, d_6, ..., d_1, d_0) is identified with the Galois Field element d_7 α^7 + d_6 α^6 + ... + d_1 α + d_0. The number of check bytes is R, and the codeword size is N = K + R bytes.
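A compact sketch of the GF(256) arithmetic and generator polynomial described above, using the primitive polynomial x^8 + x^4 + x^3 + x^2 + 1 and α = x; it only derives the coefficients of G(D) and is not a full Reed-Solomon encoder.

```python
PRIM = 0x11D  # x^8 + x^4 + x^3 + x^2 + 1

def gf_mul(a: int, b: int) -> int:
    """Multiply two GF(256) elements, reducing by the primitive polynomial above."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        a <<= 1
        if a & 0x100:
            a ^= PRIM
        b >>= 1
    return result

def rs_generator(R: int):
    """Coefficients of G(D) = prod_{i=0}^{R-1} (D + alpha^i), highest degree first."""
    g = [1]
    alpha_i = 1                                            # alpha^0
    for _ in range(R):
        shifted = g + [0]                                  # g(D) * D
        scaled = [0] + [gf_mul(c, alpha_i) for c in g]     # g(D) * alpha^i
        g = [x ^ y for x, y in zip(shifted, scaled)]
        alpha_i = gf_mul(alpha_i, 0x02)                    # next power of alpha
    return g

print(rs_generator(4))   # monic polynomial of degree 4 (5 coefficients)
```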
3.4.6.2 Reed-Solomon Forward Error Correction Superframe Synchronization
When entering the SHOWTIME state after completion of Initialization and Fast Retrain, the ATU shall align the first byte of the first Reed-Solomon codeword with the first data byte of DF 0.
3.4.6.3 Interleaving
The Reed-Solomon codewords in the interleaved buffer shall be convolutionally interleaved. The interleaving depth varies, but shall always be a power of 2. Convolutional interleaving is defined by the rule: each of the N bytes B_0, B_1, ..., B_(N-1) in a Reed-Solomon codeword is delayed by an amount that varies linearly with the byte index. More precisely, byte B_i (with index i) is delayed by (D-1) × i bytes, where D is the interleave depth.
An example for N = 5, D = 2 is shown in Table 15, where B_i^j denotes the i-th byte of the j-th codeword.
Table 15 - Convolutional interleaving example for N = 5, D = 2
[Table 15 is reproduced as an image in the original publication.]
With the above-defined rule, and the chosen interleaving depths (powers of 2), the output bytes from the interleaver always occupy distinct time slots when N is odd. When N is even, a dummy byte shall be added at the beginning of the codeword at the input to the interleaver. The resultant odd-length codeword is then convolutionally interleaved, and the dummy byte shall then be removed from the output of the interleaver.
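A behavioral sketch of the interleaving rule quoted above (byte i delayed by (D-1) × i byte slots), reproducing the N = 5, D = 2 arrangement of Table 15; output slots not yet filled at start-up are shown as None, and the codeword length is assumed to be odd (a dummy byte having already been added if necessary).

```python
def interleave(codewords, D):
    """codewords: list of equal-length (odd N) codewords; returns the interleaved stream."""
    N = len(codewords[0])
    total = len(codewords) * N
    out = [None] * (total + (D - 1) * (N - 1))       # room for the delayed tail bytes
    for cw_index, cw in enumerate(codewords):
        for i, byte in enumerate(cw):
            slot = cw_index * N + i + (D - 1) * i    # original position plus its delay
            out[slot] = byte
    return out

# Two consecutive codewords A and B, as in the Table 15 example (N = 5, D = 2)
A = ["A0", "A1", "A2", "A3", "A4"]
B = ["B0", "B1", "B2", "B3", "B4"]
print(interleave([A, B], D=2))
```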
3.4.6.4 Support of higher downstream bit rates with S = 1/2
With a rate of 4000 data frames per second and a maximum of 255 bytes (maximum R-S codeword size) per data frame, the ADSL downstream line rate is limited to approximately 8 Mbit/s per latency path. The line rate limit can be increased to about 16 Mbit/s for the interleaved path by mapping two RS codewords into one FEC data frame (i.e., by using S = 1/2 in the interleaved path). S = 1/2 shall be used in the downstream direction only over bearer channel AS0. When the K_I data bytes per interleaved mux data frame cannot be packed into one RS codeword, i.e., K_I is such that K_I + R_I > 255, the K_I data bytes shall be split into two consecutive RS codewords. When K_I is even, the first and second codewords have the same length, N_I1 = N_I2 = K_I/2 + R_I; otherwise the first codeword is one byte longer than the second, i.e. the first codeword has N_I1 = (K_I + 1)/2 + R_I bytes and the second codeword has N_I2 = (K_I - 1)/2 + R_I bytes. For the FEC output data frame, N_I = N_I1 + N_I2, with N_I < 511 bytes.
The convolutional interleaver requires all codewords to have the same odd length. To achieve the odd codeword length, insertion of a dummy (not transmitted) byte may be required. For S = 1/2, the dummy byte addition to the first and/or second codeword at the input of the interleaver shall be as in Table 16.
Table 16 - Dummy byte insertion at interleaver input for S = 1/2
[Table 16 is reproduced as an image in the original publication.]
3.4.7 Tone ordering
A DMT time-domain signal has a high peak-to-average ratio (PAR) (its amplitude distribution is almost Gaussian), and large values may be clipped by the digital-to-analog converter. The error signal caused by clipping can be considered as an additive negative impulse for the time sample that was clipped. The clipping error power is almost equally distributed across all tones in the symbol in which clipping occurs. Clipping is therefore most likely to cause errors on those tones that, in anticipation of a higher received SNR, have been assigned the largest number of bits (and therefore have the densest constellations). These occasional errors can be reliably corrected by the FEC coding if the tones with the largest number of bits have been assigned to the interleave buffer.
The numbers of bits and the relative gains to be used for every tone shall be calculated in the ATU-R receiver, and sent back to the ATU-C according to a defined protocol. The pairs of numbers are stored, in ascending order of frequency (or tone number i), in a bit and gain table.
The "tone-ordered" encodmg shall first assign the S*NF bits from the fast data buffer to the tones with the smallest number of bits assigned to them, and then the 8'Λ7/ bits from the interleave data buffer to the remaining tones
All tones shall be encoded with the number of bits assigned to them; one tone may therefore have a mixture of bits from the fast and interleaved buffers. The ordered bit table b'_i shall be based on the original bit table b_i as follows:
For k = 0 to 15:
From the bit table, find the set of all i with the number of bits per tone b_i = k; assign b_i to the ordered bit allocation table in ascending order of i.
A complementary de-ordering procedure should be performed in the ATU-R receiver. It is not necessary, however, to send the results of the ordering process to the receiver, because the bit table was originally generated in the ATU-R, and therefore that table has all the information necessary to perform the de-ordering.
Figure 67 and Figure 68 show an example of tone ordering and bit extraction (without and with trellis coding, respectively) for a 6-tone DMT case, with N_F = 1 and N_I = 1 for simplicity.
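A direct sketch of the ordering rule given above: tones are listed in order of increasing bit allocation (k = 0 to 15), ties broken by ascending tone index, so that fast-buffer bits land on the smallest constellations first. The 6-tone bit table below is a hypothetical example in the spirit of Figures 67 and 68, not normative data.

```python
def tone_order(bit_table):
    """Return tone indices in the order used for bit assignment."""
    order = []
    for k in range(16):                   # number of bits per tone, 0..15
        for i, b in enumerate(bit_table):
            if b == k:
                order.append(i)           # ascending i within each value of k
    return order

bit_table = [2, 0, 6, 3, 2, 4]            # b_i for a hypothetical 6-tone DMT symbol
print(tone_order(bit_table))              # -> [1, 0, 4, 3, 5, 2]
```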
3.4.8 Constellation encoder (trellis code version)
Block processing of Wei's 16-state 4-dimensional trellis code is optional in ITU-T Recommendation G.992.1 to improve system performance. An algorithmic constellation encoder shall be used to construct constellations with a maximum number of bits equal to N_downmax, where 8 ≤ N_downmax ≤ 15.
3.4.8.1 Bit extraction
Data bytes from the data frame buffer shall be extracted according to a re-ordered bit allocation table b'_i, least significant bit first. Because of the 4-dimensional nature of the code, the extraction is based on pairs of consecutive b'_i, rather than on individual ones, as in the non-trellis-coded case. Furthermore, due to the constellation expansion associated with coding, the bit allocation table, b'_i, specifies the number of coded bits per tone, which can be any integer from 2 to 15. Given a pair (x, y) of consecutive b'_i, x + y - 1 bits (reflecting a constellation expansion of 1 bit per 4 dimensions, or one half bit per tone) are extracted from the data frame buffer. These z = x + y - 1 bits (t_z, t_(z-1), ..., t_1) are used to form the binary word u as shown in Table 17. The tone ordering procedure ensures x ≤ y. Single-bit constellations are not allowed because they can be replaced by 2-bit constellations with the same average energy. Refer to 3.4.8.2 for the reason behind the special form of the word u for the case x = 0, y > 1.
Table 17 - Forming the binary word u
[Table 17 is reproduced as an image in the original publication.]
The last two 4-dimensional symbols in the DMT symbol shall be chosen to force the convolutional encoder state to the zero state. For each of these symbols, the 2 LSBs of u are pre-determined, and only (x + y - 3) bits shall be extracted from the data frame buffer and shall be allocated to t_3, ..., t_z.
3.4.8.2 Bit conversion
The binary word u = (u_z', u_(z'-1), ..., u_1) determines two binary words v = (v_(z-y), ..., v_0) and w = (w_(y-1), ..., w_0), which are used to look up two constellation points in the encoder constellation table. For the usual case of x > 1 and y > 1, z' = z = x + y - 1, and v and w contain x and y bits respectively. For the special case of x = 0 and y > 1, z' = z + 2 = y + 1, v = (v_1, v_0) = 0 and w = (w_(y-1), ..., w_0). The bits (u_3, u_2, u_1) determine (v_1, v_0) and (w_1, w_0) according to Figure 69. The convolutional encoder shown in Figure 69 is a systematic encoder (i.e. u_1 and u_2 are passed through unchanged) as shown in Figure 70. The convolutional encoder state (S_3, S_2, S_1, S_0) is used to label the states of the trellis shown in Figure 72. At the beginning of a DMT symbol period the convolutional encoder state is initialized to (0, 0, 0, 0). The remaining bits of v and w are obtained from the less significant and more significant parts of (u_z', u_(z'-1), ..., u_4), respectively. When x > 1 and y > 1, v = (u_(z'-y+2), ..., u_4, v_1, v_0) and w = (u_z', ..., u_(z'-y+3), w_1, w_0). When x = 0, the bit extraction and conversion algorithms have been judiciously designed so that v_1 = v_0 = 0. The binary word v is input first to the constellation encoder, and then the binary word w.
In order to force the final state to the zero state (0, 0, 0, 0), the 2 LSBs u_1 and u_2 of the final two 4-dimensional symbols in the DMT symbol are constrained to u_1 = S_1 ⊕ S_3, and u_2 = S_2.
3.4.8.3 Coset partition and trellis diagram
In a trellis code modulation system, the expanded constellation is labeled and partitioned into subsets ("cosets") using a technique called mapping by set-partitioning. The four-dimensional cosets in Wei's code can each be written as the union of two Cartesian products of two 2-dimensional cosets. For example, C4^0 = (C2^0 × C2^0) ∪ (C2^3 × C2^3). The four constituent 2-dimensional cosets, denoted by C2^0, C2^1, C2^2, C2^3, are shown in Figure 71.
The encoding algorithm ensures that the 2 least significant bits of a constellation point comprise the index i of the 2-dimensional coset C2^i in which the constellation point lies. The bits (v_1, v_0) and (w_1, w_0) are in fact the binary representations of this index.
The three bits (u_2, u_1, u_0) are used to select one of the 8 possible four-dimensional cosets. The 8 cosets are labeled C4^i, where i is the integer with binary representation (u_2, u_1, u_0). The additional bit u_3 (see Figure 69) determines which one of the two Cartesian products of 2-dimensional cosets in the 4-dimensional coset is chosen. The relationship is shown in Table 18. The bits (v_1, v_0) and (w_1, w_0) are computed from (u_3, u_2, u_1, u_0) using the linear equations given in Figure 69.
Table 18 - Relation between 4-dimensional and 2-dimensional cosets
[Table 18 is reproduced as an image in the original publication.]
Figure 72 shows the trellis diagram based on the finite state machine in Figure 70, and the one-to-one correspondence between (u_2, u_1, u_0) and the 4-dimensional cosets. In the figures, S = (S_3, S_2, S_1, S_0) represents the current state, while T = (T_3, T_2, T_1, T_0) represents the next state in the finite state machine. S is connected to T in the constellation diagram by a branch determined by the values of u_2 and u_1. The branch is labeled with the 4-dimensional coset specified by the values of u_2, u_1 (and u_0 = S_0, see Figure 71). To make the constellation diagram more readable, the indices of the 4-dimensional coset labels are listed next to the starting and end points of the branches, rather than on the branches themselves. The leftmost label corresponds to the uppermost branch for each state. The constellation diagram is used when decoding the trellis code by the Viterbi algorithm.
3.4.8.4 Constellation encoder
For a given sub-carrier, the encoder shall select an odd-integer point (X, Y) from the square-grid constellation based on the b bits of either {v_(b-1), v_(b-2), ..., v_1, v_0} or {w_(b-1), w_(b-2), ..., w_1, w_0}. For convenience of description, these b bits are identified with an integer label whose binary representation is (v_(b-1), v_(b-2), ..., v_1, v_0), but the same encoding rules apply also to the w vector. For example, for b = 2, the four constellation points are labeled 0, 1, 2, 3 corresponding to (v_1, v_0) = (0, 0), (0, 1), (1, 0), (1, 1), respectively (v_0 is the first bit extracted from the buffer).
3.4.8.4.1 Even values of b
For even values of b, the integer values X and Y of the constellation point (X, Y) shall be determined from the b bits (v_(b-1), v_(b-2), ..., v_1, v_0) as follows: X and Y are the odd integers with twos-complement binary representations (v_(b-1), v_(b-3), ..., v_1, 1) and (v_(b-2), v_(b-4), ..., v_0, 1), respectively. The most significant bits (MSBs), v_(b-1) and v_(b-2), are the sign bits for X and Y respectively.
Figure 74 shows example constellations for b = 2 and b = 4. (The values of X and Y shown represent the output of the constellation encoder. These values require appropriate scaling such that (1) all constellations, regardless of size, represent the same RMS energy, and (2) they are further scaled by the fine gain scaling before modulation by the IDFT.)
The 4-bit constellation can be obtained from the 2-bit constellation by replacing each label n by a 2 × 2 block of labels as shown in Figure 74. The same procedure can be used to construct the larger even-bit constellations recursively.
The constellations obtained for even values of b are square in shape. The least significant bits {v_1, v_0} represent the coset labeling of the constituent 2-dimensional cosets used in the 4-dimensional Wei trellis code.

3.4.8.4.2 Odd values of b: b = 3
Figure 75 shows the constellation for the case b = 3. (The values of X and Y shown represent the output of the constellation encoder. These values require appropriate scaling, 1) such that all constellations, regardless of size, represent the same RMS energy, and 2) by the fine gain scaling, before modulation by the IDFT.)

3.4.8.4.3 Odd values of b: b > 3

If b is odd and greater than 3, the 2 MSBs of X and the 2 MSBs of Y are determined by the 5 MSBs of the b bits. Let c = (b+1)/2; then X and Y have the twos-complement binary representations (X_c, X_{c-1}, v_{b-4}, v_{b-6}, ..., v_3, v_1, 1) and (Y_c, Y_{c-1}, v_{b-5}, v_{b-7}, v_{b-9}, ..., v_2, v_0, 1), where X_c and Y_c are the sign bits of X and Y respectively. The relationship between X_c, X_{c-1}, Y_c, Y_{c-1} and v_{b-1}, v_{b-2}, ..., v_{b-5} is shown in Table 19.
Figure 76 shows the constellation for the case b = 5 (the X and Y values are on a ±1, ±3, ... grid). The values of X and Y shown represent the output of the constellation encoder. These values require appropriate scaling,

1) such that all constellations, regardless of size, represent the same RMS energy, and

2) by the fine gain scaling, before modulation by the IDFT.

The 7-bit constellation shall be obtained from the 5-bit constellation by replacing each label n by the 2x2 block of labels as shown in Figure 74.

Again, the same procedure shall be used to construct the larger odd-bit constellations recursively. Note also that the least significant bits {v_1, v_0} represent the coset labeling of the constituent 2-dimensional cosets used in the 4-dimensional Wei trellis code.
3.4.9 Constellation encoder (no trellis coding)

An algorithmic constellation encoder shall be used to construct constellations with a maximum number of bits equal to N_downmax, where 8 ≤ N_downmax ≤ 15. The constellation encoder shall not use trellis coding with this option.

3.4.9.1 Bit extraction

Data bits from the frame data buffer shall be extracted according to a re-ordered bit allocation table b'_i, least significant bit first. The number of bits per tone, b'_i, can take the value zero or any integer value greater than 1, up to N_downmax. For a given tone, b = b'_i bits are extracted from the data frame buffer, and these bits form a binary word {v_{b-1}, v_{b-2}, ..., v_1, v_0}. The first bit extracted shall be v_0, the LSB.

3.4.10 Gain scaling

For the transmission of data symbols, gain scaling, g_i, shall be applied as requested by the ATU-R and possibly updated during Showtime via a bit swap procedure. Only values of g_i equal to zero or within a range of approximately 0.19 to 1.33 (i.e., -14.5 dB to +2.5 dB) may be used. For the transmission of synchronization symbols, no gain scaling shall be applied to any sub-carrier.

Each constellation point, (X_i, Y_i), i.e. complex number X_i + jY_i, output from the encoder is multiplied by g_i:

Z_i = g_i (X_i + jY_i)
3.4.11 Modulation

3.4.11.1 Sub-carriers

The frequency spacing, Δf, between sub-carriers is 4.3125 kHz, with a tolerance of +/- 50 ppm.

3.4.11.1.1 Data sub-carriers

The channel analysis signal allows for a maximum of 255 carriers (at frequencies nΔf, n = 1 to 255) to be used. The lower limit of n depends on both the duplexing and service options selected. For example, for the ADSL above POTS service option, if overlapped spectrum is used to separate downstream and upstream signals, then the lower limit on n is determined by the POTS splitting filters; if frequency division multiplexing (FDM) is used, the lower limit is set by the downstream-upstream separation filters.

3.4.11.1.2 Pilot

Carrier #N_pilot (f_pilot = 4.3125 x N_pilot kHz) shall be reserved for a pilot; that is, b(N_pilot) = 0 and g(N_pilot) = 1. The data modulated onto the pilot sub-carrier shall be a constant {0,0}. Use of this pilot allows resolution of sample timing in a receiver modulo 8 samples. Therefore a gross timing error that is an integer multiple of 8 samples could still persist after a micro-interruption (e.g., a temporary short-circuit, open circuit or severe line hit); correction of such timing errors is made possible by the use of the synchronization symbol.

3.4.11.1.3 Nyquist frequency

The carrier at the Nyquist frequency (#256) shall not be used for user data and shall be real valued.

3.4.11.1.4 DC

The carrier at DC (#0) shall not be used, and shall contain no energy.

3.4.11.2 Modulation by the inverse discrete Fourier transform (IDFT)
The modulating transform defines the relationship between the 512 real values x_n and the Z_i:

x_n = Σ_{i=0}^{511} exp(j 2π n i / 512) Z_i, for n = 0 to 511

The constellation encoder and gain scaling generate only 255 complex values of Z_i. In order to generate real values of x_n, the input values (255 complex values plus zero at DC and one real value for Nyquist, if used) shall be augmented so that the vector Z has Hermitian symmetry. That is,

Z_i = conj(Z_{512-i}) for i = 257 to 511
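As a non-normative illustration of the Hermitian extension and the modulating transform, the following numpy sketch (our own helper, with Z given as the 257 values Z_0 through Z_256) produces the 512 real samples x_n:

    import numpy as np

    def dmt_modulate(Z, N=512):
        # Z holds Z_0 (DC, zero) .. Z_255 plus Z_256 (Nyquist, real or zero)
        full = np.zeros(N, dtype=complex)
        full[:N // 2 + 1] = Z
        full[N // 2 + 1:] = np.conj(Z[1:N // 2][::-1])   # Z_i = conj(Z_{512-i}), i = 257..511
        # x_n = sum_i exp(j*2*pi*n*i/512) * Z_i; numpy's ifft includes a 1/N factor
        x = N * np.fft.ifft(full)
        return x.real                                    # imaginary part is ~0 by construction

Because of the imposed Hermitian symmetry, the imaginary part of the transform output is zero up to numerical rounding, which is why only the real part is returned.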
3.4.11.3 Synchronization symbol

The synchronization symbol permits recovery of the frame boundary after micro-interruptions that might otherwise force retraining. The data symbol rate, f_symb = 4 kHz, the carrier separation, Δf = 4.3125 kHz, and the IDFT size, N = 512, are such that a cyclic prefix of 40 samples could be used. That is, (512 + 40) x 4.0 = 512 x 4.3125 = 2208.

The cyclic prefix shall, however, be shortened to 32 samples, and a synchronization symbol (with a nominal length of 544 samples) is inserted after every 68 data symbols. That is,

(512 + 32) x 69 = (512 + 40) x 68.

The data pattern used in the synchronization symbol shall be the pseudo-random sequence PRD, (d_n, for n = 1 to 512), defined by:

d_n = 1 for n = 1 to 9
d_n = d_{n-4} ⊕ d_{n-9} for n = 10 to 512
The first pair of bits (d_1 and d_2) shall be used for the DC and Nyquist sub-carriers (the power assigned to them is zero, so the bits are effectively ignored). The first and second bits of subsequent pairs are then used to define the X_i and Y_i for i = 1 to 255, as shown in Table 20.

Table 20 - Mapping of two data bits into a 4QAM constellation
The period of the PRD is only 511 bits, so d_512 shall be equal to d_1. The bits d_1 - d_9 shall be re-initialized for each synchronization symbol, so each symbol uses the same data. Bits 129 and 130, which modulate the pilot carrier, shall be overwritten by {0,0}, generating the {+,+} constellation point.
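The generation of the PRD and its mapping onto the sub-carriers can be sketched as follows. This is non-normative; since Table 20 is not reproduced in this text, the 4QAM mapping used here (bit 0 → +, bit 1 → −) is an assumption consistent with the rule that bits {0,0} give the {+,+} point.

    def prd_bits(length=512):
        # d_n = 1 for n = 1..9; d_n = d_{n-4} XOR d_{n-9} for n = 10..length (1-based n)
        d = [1] * 9
        for n in range(10, length + 1):
            d.append(d[n - 5] ^ d[n - 10])
        return d

    def sync_symbol_points(d):
        # the first pair (d_1, d_2) goes to DC and Nyquist and is effectively ignored;
        # the pair (d_{2i+1}, d_{2i+2}) defines (X_i, Y_i) for sub-carrier i = 1..255
        return {i: (1 - 2 * d[2 * i], 1 - 2 * d[2 * i + 1]) for i in range(1, 256)}

The pilot sub-carrier would subsequently be overwritten with the {+,+} point, as required above.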
The minimum set of sub-carriers to be used is the set used for data transmission (i.e., those for which b_i > 0). The data modulated onto each sub-carrier shall be as defined above; it shall not depend on which sub-carriers are used.

3.4.12 Cyclic prefix
The last 32 samples of the output of the IDFT (x_n for n = 480 to 511) shall be prepended to the block of 512 samples and read out to the digital-to-analog converter (DAC) in sequence. That is, the subscripts, n, of the DAC samples in sequence are 480...511, 0...511.
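The prefix operation itself is a simple prepend, shown here only for illustration:

    import numpy as np

    def add_cyclic_prefix(x, prefix_len=32):
        # DAC sees samples 480..511 followed by 0..511
        x = np.asarray(x)
        return np.concatenate([x[-prefix_len:], x])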
3.4.13 Transmitter dynamic range

The transmitter includes all analog transmitter functions: the D/A converter, the anti-aliasing filter, the hybrid circuitry, and the high-pass part of the POTS or ISDN splitter.

3.4.13.1 Maximum clipping rate

The maximum output signal of the transmitter shall be such that the signal shall be clipped no more than 0.00001% of the time.
3.4.13.2 Noise/Distortion floor

The signal to noise plus distortion ratio of the transmitted signal in a given sub-carrier is specified as the ratio of the rms value of the tone in that sub-carrier to the rms sum of all the non-tone signals in the 4.3125 kHz frequency band centered on the sub-carrier frequency. This ratio is measured for each sub-carrier used for transmission using a MultiTone Power Ratio (MTPR) test as shown in Figure 77.

Over the transmission frequency band, the MTPR of the transmitter in any sub-carrier shall be no less than (3 N_downi + 20) dB, where N_downi is defined as the size of the constellation (in bits) to be used on sub-carrier i. The minimum transmitter MTPR shall be at least 38 dB (corresponding to an N_downi of 6) for any sub-carrier.

Signals transmitted during normal initialization and data transmission cannot be used for this test because the DMT symbols have a cyclic prefix appended, and the PSD of a non-repetitive signal does not have nulls at any sub-carrier frequencies. A gated FFT-based analyzer could be used, but this would measure both the non-linear distortion and the linear distortion introduced by the transmit filter. Therefore this test will require that the transmitter be programmed with special software, probably to be used during development only.

3.5 ATU-R Functional Characteristics
An ATU-R may support STM transmission or ATM transmission or both. The framing modes that shall be supported depend upon the ATU-R being configured for either STM or ATM transport. If framing mode k is supported, then modes k-1, ..., 0 shall also be supported.

During initialization, the ATU-C and ATU-R shall indicate a framing mode number 0, 1, 2 or 3 which they intend to use. The lowest indicated framing mode shall be used.

An ATU-R may support reconstruction of a Network Timing Reference (NTR) from the downstream indicator bits.

3.5.1 STM Transmission Protocol Specific functionalities

3.5.1.1 ATU-R input and output V interfaces for STM transport
The functional data interfaces at the ATU-R are shown in Figure 78. Output interfaces for the high-speed downstream simplex bearer channels are designated AS0 through AS3; input-output interfaces for the duplex bearer channels are designated LS0 through LS2. There may also be a functional interface to transport operations, administration and maintenance (OAM) indicators from the CI to the ATU-R; this interface may physically be combined with the LS0 interface.

3.5.1.2 Downstream simplex channels - Transceiver bit rates

The simplex channels are transported in the downstream direction only; therefore their data interfaces at the ATU-R operate only as outputs.

3.5.1.3 Duplex channels - Transceiver bit rates

The duplex channels are transported in both directions, so the ATU-R shall provide both input and output data interfaces.

3.5.1.4 Framing Structure for STM transport

An ATU-R configured for STM transport shall support the full overhead framing structure 0. The support of full overhead framing structure 1 and reduced overhead framing structures 2 and 3 is optional.

Preservation of T-R interface byte boundaries (if present) at the U-R interface may be supported for any of the U-R interface framing structures.
An ATU-R configured for STM transport may support reconstruction of a Network Timing Reference (NTR).

3.5.2 ATM Transport Protocol Specific functionalities

3.5.2.1 ATU-R input and output V interfaces for ATM transport

The ATU-R input and output T interfaces are identical to the ATU-C input and output interfaces, as shown in Figure 79.

3.5.2.2 ATM Cell specific functionalities

The ATM cell specific functionalities performed at the ATU-R shall be identical to the ATM cell specific functionalities performed at the ATU-C.

3.5.2.3 Framing Structure for ATM transport

An ATU-R configured for ATM transport shall support the full overhead framing structures 0 and 1. The ATU-R transmitter shall preserve T-R interface byte boundaries (explicitly present or implied by ATM cell boundaries) at the U-R interface, independent of the U-R interface framing structure.

An ATU-R configured for ATM transport may support reconstruction of a Network Timing Reference (NTR). To ensure framing structure 0 interoperability between an ATM ATU-R and an ATM cell TC plus an STM ATU-C (i.e., ATM over STM), the following shall apply:

• An STM ATU-C transporting ATM cells and not preserving V-C byte boundaries at the U-C interface shall indicate during initialization that frame structure 0 is the highest frame structure supported;

• An STM ATU-C transporting ATM cells and preserving V-C byte boundaries at the U-C interface shall indicate during initialization that frame structure 0, 1, 2 or 3 is the highest frame structure supported, as applicable to the implementation;

• An ATM ATU-R receiver operating in framing structure 0 cannot assume that the ATU-C transmitter will preserve V-C interface byte boundaries at the U-C interface and shall therefore perform the cell delineation bit-by-bit.

3.5.3 Network timing reference

If the ATU-C has indicated that it will use indicator bits 20 to 23 to transmit the change of phase offset, the ATU-R may deliver the 8 kHz signal to the T-R interface.

3.5.4 Framing
Framing of the upstream signal (ATU-R transmitter) closely follows the downstream framing (ATU-C transmitter), but with the following exceptions:

• There are no ASx channels and no AEX byte;

• A maximum of three channels exist, so that only three (B_F, B_I) pairs are specified;

• The minimum RS FEC coding parameters and interleave depth differ (see Table 23);

• Four bits of the fast and sync bytes are unused (corresponding to the bit positions used by the ATU-C transmitter to specify synchronization control for the ASx channels) (see Table 21 and Table 22);

• The four indicator bits for NTR transport are not used in the upstream direction.

Two types of framing are defined: full overhead and reduced overhead. Furthermore, two versions of full overhead and two versions of reduced overhead are defined. The four resulting framing structures are defined as for the ATU-C and are referred to as framing structures 0, 1, 2 and 3.

Requirements for framing structures to be supported depend upon the ATU-R being configured for either STM or ATM transport.

Outside the ASx/LSx serial interfaces, data bytes are transmitted MSB first in accordance with ITU-T Recommendations G.703, G.707, I.361, and I.432. All serial processing in the ADSL frame (e.g., CRC, scrambling, etc.) shall, however, be performed LSB first, with the outside world MSB considered by the ADSL as LSB. As a result, the first incoming bit (outside world MSB) will be the first processed bit inside the ADSL (ADSL LSB).

3.5.4.1 Data symbols
The ATU-R transmitter is functionally similar to the ATU-C transmitter, except that up to three duplex data channels are synchronized to the 4 kHz ADSL DMT symbol rate (instead of up to four simplex and three duplex channels as is the case for the ATU-C). The ATU-R transmitter and its associated reference points for data framing are shown in Figure 55 and Figure 56.

3.5.4.1.1 Superframe structure

The superframe structure of the ATU-R transmitter is identical to that of the ATU-C transmitter, shown in Figure 61.

The ATU-R shall support the indicator bits. The indicator bits, ib20-23, shall not transport NTR in the upstream direction and shall be set to 1.

3.5.4.1.2 Frame structure (with full overhead)

Each data frame shall be encoded into a DMT symbol. As specified for the ATU-C and shown in Figure 61, each frame is composed of a fast data buffer and an interleaved data buffer, and the frame structure has a different appearance at each of the reference points (A, B, and C). The bytes of the fast data buffer shall be clocked into the constellation encoder first, followed by the bytes of the interleaved data buffer. Bytes are clocked least significant bit first. The assignment of bearer channels to the fast and interleaved buffers shall be configured during initialization with the exchange of a (B_F, B_I) pair for each data stream, where B_F designates the number of bytes of a given data stream to allocate to the fast buffer, and B_I designates the number of bytes allocated to the interleaved data buffer.

The three possible (B_F, B_I) pairs are B_F(LSx), B_I(LSx) for x = 0, 1 and 2; for the duplex channels, they are specified as for the ATU-C.

3.5.4.1.2.1 Fast data buffer
The frame structure of the fast data buffer is the same as that specified for the ATU-C with the following exceptions:

• ASx bytes do not appear;

• The AEX byte does not appear.

The following shall hold for the parameters shown in Figure 80:

C_F(LS0) = 0 if B_F(LS0) = 255 (11111111_2)
         = B_F(LS0) otherwise

L_F = 0 if B_F(LS0) = B_F(LS1) = B_F(LS2) = 0
    = 1 otherwise

K_F = 1 + C_F(LS0) + B_F(LS1) + B_F(LS2) + L_F

N_F = K_F + R_F

where R_F = number of upstream FEC redundancy bytes in the fast path.
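These relations are straightforward to evaluate; the following non-normative helper (our own naming) computes C_F(LS0), L_F, K_F and N_F from the byte allocations and the FEC redundancy R_F:

    def fast_buffer_params(bf_ls0, bf_ls1, bf_ls2, rf):
        cf_ls0 = 0 if bf_ls0 == 0xFF else bf_ls0           # B_F(LS0) = 255 flags the 16 kbit/s C channel case
        lf = 0 if (bf_ls0 == bf_ls1 == bf_ls2 == 0) else 1
        kf = 1 + cf_ls0 + bf_ls1 + bf_ls2 + lf
        nf = kf + rf
        return cf_ls0, lf, kf, nf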
At reference point A (the mux data frame) in Figure 55 and Figure 56, the fast buffer always contains at least the fast byte. This is followed by B_F(LS0) bytes of channel LS0, then B_F(LS1) bytes of channel LS1, and B_F(LS2) bytes of channel LS2, and, if any B_F(LSx) is non-zero, a LEX byte. When B_F(LS0) = 255 (11111111_2), no separate bytes are included for the LS0 channel. Instead, the 16 kbit/s C channel shall be transported in every other LEX byte on average, using the synchronization byte to denote when to add the LEX byte to the LS0 bearer channel.

R_F FEC redundancy bytes shall be added to the mux data frame (reference point A) to produce the FEC output data frame (reference point B), where R_F is given in the C-RATES1 signal options received from the ATU-C during initialization. Because the data from the fast data buffer is not interleaved, the constellation encoder input data frame (reference point C) is identical to the FEC output data frame (reference point B).

3.5.4.1.2.2 Interleaved data buffer
The frame structure of the interleaved data buffer is shown in Figure 81 for the three reference points that are defined in Figure 55 and Figure 56. This structure is the same as that specified for the ATU-C, with the following exceptions:

• ASx bytes do not appear;

• The AEX byte does not appear.

The following shall hold for the parameters shown in Figure 81:

C_I(LS0) = 0 if B_I(LS0) = 255 (11111111_2)
         = B_I(LS0) otherwise

L_I = 0 if B_I(LS0) = B_I(LS1) = B_I(LS2) = 0
    = 1 otherwise

K_I = 1 + C_I(LS0) + B_I(LS1) + B_I(LS2) + L_I

N_I = (S x K_I + R_I) / S

where R_I = number of upstream FEC redundancy bytes in the interleaved path and S = number of mux data frames per FEC codeword.
3.5.4.1.3 Cyclic redundancy check (CRC)

Two cyclic redundancy checks (CRCs) - one for the fast data buffer and one for the interleaved data buffer - are generated for each superframe and transmitted in the first frame of the following superframe. Eight bits per buffer type (fast or interleaved) per superframe are allocated to the CRC check bits. These bits are computed from the k message bits using the equation:

crc(D) = M(D) D^8 modulo G(D)

where

M(D) = m_0 D^{k-1} + m_1 D^{k-2} + ... + m_{k-2} D + m_{k-1}

is the message polynomial,

G(D) = D^8 + D^4 + D^3 + D^2 + 1

is the generating polynomial, and

crc(D) = c_0 D^7 + c_1 D^6 + ... + c_6 D + c_7

is the check polynomial. The CRC bits are transported in the fast byte (8 bits) of frame 0 in the fast data buffer, and the sync byte (8 bits) of frame 0 in the interleaved data buffer. The bits covered by the CRC include:
• for the fast data buffer:

■ frame 0: LSx bytes (X = 0, 1, 2), followed by the LEX byte;

■ all other frames: fast byte, followed by LSx bytes (X = 0, 1, 2), and LEX byte;

• for the interleaved data buffer:

■ frame 0: LSx bytes (X = 0, 1, 2), followed by the LEX byte;

■ all other frames: sync byte, followed by LSx bytes (X = 0, 1, 2), and LEX byte.

Each byte shall be clocked into the CRC least significant bit first.
The CRC-generating polynomial and the method of generating the CRC byte are the same as for the downstream data.
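A non-normative sketch of this CRC computation is given below; it clocks each covered byte least significant bit first and reduces M(D)·D^8 by G(D) = D^8 + D^4 + D^3 + D^2 + 1. The zero initialization of the register is an assumption, since the text does not state it explicitly.

    def adsl_crc8(data_bytes):
        crc = 0
        for byte in data_bytes:
            for bit_pos in range(8):              # LSB of each byte first
                bit = (byte >> bit_pos) & 1
                feedback = ((crc >> 7) & 1) ^ bit
                crc = (crc << 1) & 0xFF
                if feedback:
                    crc ^= 0x1D                   # D^4 + D^3 + D^2 + 1, i.e. G(D) without its D^8 term
        return crc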
3.5.4.2 Synchronization

If the bit timing base of the input user data streams is not synchronous with the ADSL modem timing base, the input data streams shall be synchronized to the ADSL timing base using the synchronization control mechanism (consisting of the synchronization control byte and the LEX byte). Forward-error-correction coding shall always be applied to the synchronization control byte(s).

If the bit timing base of the input user data streams is synchronous with the ADSL modem timing base, then the synchronization control mechanism is not needed. The synchronization control byte shall always indicate "no synchronization action".

3.5.4.2.1 Synchronization for the fast data buffer
Synchronization control for the fast data buffer can occur in frames 2 through 33 and 36 through 67 of an ADSL superframe, where the fast byte may be used as the synchronization control byte. No synchronization action is to be taken for those frames in which the fast byte is used for CRC, fixed indicator bits, or EOC. The format of the fast byte when used as synchronization control for the fast data buffer shall be as given in Table 21.

In the case where no signals are allocated to the interleaved data buffer, the sync byte carries the AOC data directly as shown in Figure 63.

Table 21 - Fast byte format for synchronization
If the bit timing base of the input bearer channels (LSx) is synchronous with the ADSL modem timing base, then ADSL systems need not perform synchronization control by adding or deleting LEX bytes to/from the designated LSx channels. The synchronization control byte shall indicate "no synchronization action" (i.e., sc7-0 coded "000011X0_2", with X discretionary).

When the data rate of the C channel is 16 kbit/s, the LS0 bearer channel shall be transported in the LEX byte, using "add LEX byte to designated LSx channel", with LS0 as the designated channel, every other frame on average.

3.5.4.2.2 Synchronization for the interleaved data buffer

Synchronization control for the interleaved data buffer can occur in frames 1 through 67 of an ADSL superframe, where the sync byte may be used as the synchronization control byte. No synchronization action shall be taken during frame 0, where the sync byte is used for CRC, and during frames when the LEX byte carries the AOC.

The format of the sync byte when used as synchronization control for the interleaved data buffer shall be as given in Table 22. In the case where no signals are allocated to the interleaved data buffer, the sync byte shall carry the AOC data directly, as shown in Figure 63.

Table 22 - Sync byte format for synchronization
When the data rate of the C channel is 16 kbit/s, the LS0 bearer channel shall be transported in the LEX byte, using "add LEX byte to designated LSx channel", with LS0 as the designated channel, every other frame on average.

If the bit timing base of the input bearer channels (LSx) is synchronous with the ADSL modem timing base, then ADSL systems need not perform synchronization control by adding or deleting LEX bytes to/from the designated LSx channels, and the synchronization control byte shall indicate "no synchronization action". In this case, and when framing structure 1 is used, sc7-0 shall always be coded "000011XX_2", with X discretionary. When sc0 is set to 1, the LEX byte shall carry AOC. When sc0 is set to 0, the LEX byte shall be coded 00_16. The sc0 may be set to 0 only in between transmissions of 5 concatenated and identical AOC messages.

3.5.4.3 Reduced overhead framing

The format described in 3.5.4.1.2 for full overhead framing includes overhead to allow for the synchronization of three LSx bearer channels. When the synchronization function described in 3.5.4.2 is not required, the ADSL equipment may operate in a reduced overhead mode. This mode retains all the full overhead mode functions except synchronization control. When using the reduced overhead framing, the framing structure shall be as defined in 3.4.4.3.1 (when using separate fast and sync bytes) or 3.4.4.3.2 (when using merged fast and sync bytes).

3.5.5 Scramblers
The data streams output from the fast and interleaved buffers shall be scrambled separately using the same algorithm as for the downstream signal.

3.5.6 Forward error correction

The upstream data shall be Reed-Solomon coded and interleaved using the same algorithm as for the downstream data.

The ATU-R shall support upstream transmission with at least any combination of the FEC coding capabilities shown in Table 23.

Table 23 - Minimum FEC coding capabilities for ATU-R
The ATU-R shall also support upstream transmission with at least any combination of the FEC coding capabilities shown in Table 14.

3.5.7 Tone ordering

The tone-ordering algorithm shall be the same as for the downstream data.

3.5.8 Constellation encoder - Trellis version

Block processing of Wei's 16-state 4-dimensional trellis code to improve system performance is optional. An algorithmic constellation encoder shall be used to construct constellations with a maximum number of bits equal to N_upmax, where 8 ≤ N_upmax ≤ 15.

The encoding algorithm shall be the same as that used for downstream data (with the substitution of the constellation limit of N_upmax for N_downmax).

3.5.9 Constellation encoder - Uncoded version

An algorithmic constellation encoder shall be used to construct constellations with a maximum number of bits equal to N_upmax, where 8 ≤ N_upmax ≤ 15. The encoding algorithm is the same as that used for downstream data (with the substitution of the constellation limit of N_upmax for N_downmax). The constellation encoder shall not use trellis coding with this option.
3.5.10 Gain scaling

For the transmission of data symbols, gain scaling, g_i, shall be applied as requested by the ATU-C and possibly updated during Showtime via the bit swap procedure. Only values of g_i equal to 0 or within a range of approximately 0.19 to 1.33 (i.e., -14.5 dB to +2.5 dB) may be used. For the transmission of synchronization symbols, no gain scaling shall be applied to any sub-carrier.

Each constellation point, (X_i, Y_i), i.e. complex number X_i + jY_i, output from the encoder is multiplied by g_i:

Z_i = g_i (X_i + jY_i)

3.5.11 Modulation

The frequency spacing, Δf, between sub-carriers shall be 4.3125 kHz with a tolerance of +/- 50 ppm.
3.5.11.1 Sub-carriers

3.5.11.1.1 Data sub-carriers

The channel analysis signal allows for a maximum of 31 carriers (at frequencies nΔf) to be used. The range of n depends on the service option selected. For example, for ADSL above POTS the lower limit is set by the POTS/ADSL splitting filters; the upper limit is set by the transmit and receive band-limiting filters, and shall be no greater than 31. The cut-off frequencies of these filters are at the discretion of the manufacturer because the range of usable n is determined during the channel estimation.

3.5.11.1.2 Nyquist frequency

The sub-carrier at the Nyquist frequency shall not be used for user data and shall be real valued.

3.5.11.1.3 DC

The sub-carrier at DC (#0) shall not be used, and shall contain no energy.

3.5.11.2 Synchronization symbol
The synchronization symbol permits recovery of the frame boundary after micro-interruptions that might otherwise force retraining.

The data symbol rate, f_symb = 4 kHz, the sub-carrier separation, Δf = 4.3125 kHz, and the IDFT size, N = 64, are such that a cyclic prefix of 5 samples could be used. That is, (64 + 5) x 4.0 = 64 x 4.3125 = 276.

The cyclic prefix shall, however, be shortened to 4 samples, and a synchronization symbol (with a nominal length of 68 samples) inserted after every 68 data symbols. That is,

(64 + 4) x 69 = (64 + 5) x 68.

The minimum set of sub-carriers to be used is the set used for data transmission (i.e., those for which b_i > 0); sub-carriers for which b_i = 0 may be used at a reduced PSD. The data modulated onto each sub-carrier shall be as defined above; it shall not depend on which sub-carriers are used.

3.5.12 Transmitter dynamic range
The transmitter includes all analog transmitter functions: the D/A converter, the anti-aliasing filter, the hybrid circuitry, and the POTS splitter.

3.5.12.1 Maximum clipping rate

The maximum output signal of the transmitter shall be such that the signal shall be clipped no more than 0.00001% of the time.

3.5.12.2 Noise/Distortion floor

The signal to noise plus distortion ratio of the transmitted signal in a given sub-carrier ((S/(N+D))_i) is specified as the ratio of the rms value of the full-amplitude tone in that sub-carrier to the rms sum of all the non-tone signals in the 4.3125 kHz frequency band centered on the sub-carrier frequency. This ratio is measured for each sub-carrier used for transmission using a Multi-Tone Power Ratio (MTPR) test as shown in Figure 77.

Over the transmission frequency band, the MTPR of the transmitter in any sub-carrier shall be no less than (3 N_upi + 20) dB, where N_upi is defined as the size of the constellation (in bits) to be used on sub-carrier i. The transmitter MTPR shall be at least 38 dB (corresponding to an N_upi of 6) for any sub-carrier.

Signals transmitted during normal initialization and data transmission cannot be used for this test because the DMT symbols have a cyclic prefix appended, and the PSD of a non-repetitive signal does not have nulls at any sub-carrier frequencies. A gated FFT-based analyzer could be used, but this would measure both the non-linear distortion and the linear distortion introduced by the transmit filter. Therefore this test will require that the transmitter be programmed with special software, probably to be used during development only.
4 - Use of SMCCC and PMCCC for wired communications such as xDSL

We now describe the use of SMCCC and PMCCC for xDSL systems and, in particular, apply it to the case of ADSL transceivers. The use for other xDSL systems and other data communications systems is straightforward. It should be noted that the appended claims are not intended to be limited to the particular application of ADSL. An SMCCC is formed by two (or more) constituent systematic encoders joined through an interleaver. The input information bits feed the first encoder and, after having been scrambled by the interleaver, enter the second encoder. A code word of a serial concatenated code comprises the input bits to the first encoder followed by the parity check bits of both encoders. An SMCCC achieves near-Shannon-limit error correction performance. Here we describe the proposed encoder, the decoder and some simulation results.

4.1 - Parallel Multiple Convolutional Concatenated Codes

A PMCCC encoder is formed by two (or more) constituent systematic encoders joined through one or more interleavers. The input information bits feed the first encoder and, after having been scrambled by the interleaver, enter the second encoder. A code word of a parallel concatenated code comprises the input bits to the first encoder followed by the parity check bits of both encoders. Here we present the proposed encoder, the decoder and some simulation results. The disadvantage of the PMCCC is that it has an error floor around a BER of 10^-6. This could be improved with a good interleaver design, but at the cost of using a large number of iterations.

4.2.1 - Parallel Multiple Convolutional Concatenated Codes Encoder
A PMCCC encoder comprises two parallel concatenated recursive systematic convolutional encoders separated by an interleaver. The encoders are arranged in a "parallel concatenation". In a preferred embodiment, the concatenated recursive systematic convolutional encoders may be identical.

Figure 82 represents the proposed encoder. The input is a block of information bits. The two encoders generate parity symbols (u_0 and u'_0) from two simple recursive convolutional codes. The key innovation of this technique is an interleaver π, which permutes the original information bits before input to the second encoder. The permutation performed by the interleaver allows those input sequences for which one encoder produces low-weight codewords to usually cause the other encoder to produce high-weight codewords. Thus, even though the constituent codes are individually weak, the combination is surprisingly powerful. The resulting code has features similar to a "random" block code. In this way, we have the information symbols (u_1 and u_2) and two redundant symbols (u_0 and u'_0). With this redundancy it is possible to reach longer loops and to reduce the PAR, at the cost of a slight increase in the constellation encoder.
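A non-normative sketch of this parallel concatenation is given below; the small feedback/feedforward polynomials are placeholders chosen only for illustration and are not the 16-state generators of Figure 82 or Figure 92.

    def rsc_parity(bits, g_fb=(1, 1, 1), g_ff=(1, 0, 1)):
        # parity stream of a rate-1/2 recursive systematic convolutional encoder,
        # polynomial coefficients given from degree 0 upward (placeholder values)
        m = len(g_fb) - 1
        state = [0] * m
        parity = []
        for u in bits:
            a = (u + sum(c * s for c, s in zip(g_fb[1:], state))) % 2   # feedback bit
            p = (g_ff[0] * a + sum(c * s for c, s in zip(g_ff[1:], state))) % 2
            parity.append(p)
            state = [a] + state[:-1]
        return parity

    def pmccc_encode(bits, interleaver):
        # systematic bits u, parity u0 from encoder 1, parity u0' from encoder 2 on the permuted bits
        permuted = [bits[interleaver[k]] for k in range(len(bits))]
        return list(bits), rsc_parity(bits), rsc_parity(permuted)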
In Figure 83 we have presented the conversion that we propose, taking into account the new parity bit.

4.2.2 - Parallel Multiple Convolutional Concatenated Codes Decoder

In Figure 84 we present the decoder, which uses an iterative technique, with two soft-decision input/output trellis decoders in each decoding stage. The Maximum-a-Posteriori (MAP) trellis decoder provides the soft output result suitable for PMCCC decoding.

The first decoder should deliver a soft output to the second decoder. The logarithm of the likelihood ratio (LLR) of a bit decision is the soft decision information output by the MAP decoder. Let u_k be the binary random variable taking values in {0,1}, representing the sequence of information bits u = (u_1, ..., u_n). The optimum decision algorithm on the k-th bit u_k is based on the conditional log-likelihood ratio L_k:
L_k = log [ P(u_k = 1 | y) / P(u_k = 0 | y) ]

where P(u_k) are the a priori probabilities. Using Bayes' rule, the MAP algorithm approximates the nonseparable a posteriori distribution with a separable (product) one, so that P(u_k | y) can be separated and, for binary modulation, the soft output of the first constituent decoder can be written as

L_k = f(y_1, L_0, L_2, k) + L_{0k} + L_{2k}    (110)

and similarly for the second constituent decoder

L_k = f(y_2, L_0, L_1, k) + L_{0k} + L_{1k}    (113)

A solution to these equations is

L_{1k} = f(y_1, L_0, L_2, k)    (115)
L_{2k} = f(y_2, L_0, L_1, k)    (116)

for k = 1, 2, ..., n. The final decision is based on

L_k = L_{0k} + L_{1k} + L_{2k}

which is passed through a hard limiter with zero threshold.

The nonlinear equations can be solved using the iterative procedure

L_{1k}^(m) = a_1^(m) f(y_1, L_0, L_2^(m-1), k)    (118)
L_{2k}^(m) = a_2^(m) f(y_2, L_0, L_1^(m), k)    (119)

The recursion can be started with the initial condition

L_1^(0) = L_2^(0) = L_0    (120)

For each iteration, a_1^(m) and a_2^(m) can be optimized or set to 1 for simplicity.
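The structure of this iterative solution is sketched below in non-normative form; the constituent MAP computations f(y_1, L_0, L_2, k) and f(y_2, L_0, L_1, k) are abstracted as user-supplied callables f1 and f2, and the scale factors a_1^(m), a_2^(m) are simply set to 1.

    def iterative_pmccc_decode(L0, f1, f2, iterations=9):
        # L0: list of a priori / channel LLRs; f1(k, L2), f2(k, L1): constituent soft outputs
        n = len(L0)
        L1 = list(L0)                              # initial condition (120)
        L2 = list(L0)
        for _ in range(iterations):
            L1 = [f1(k, L2) for k in range(n)]     # equation (118) with a_1 = 1
            L2 = [f2(k, L1) for k in range(n)]     # equation (119) with a_2 = 1
        L = [L0[k] + L1[k] + L2[k] for k in range(n)]
        return [1 if llr > 0 else 0 for llr in L]  # hard limiter with zero threshold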
4.2.3 Design of the interleaver for PMCCC

In a PMCCC the interleaver establishes a relationship between portions of a codeword. It is generally assumed that when a PMCCC decoder is operating at low bit error rates, error sequences have small Hamming weights. From this, and from properties of the PMCCC, a mathematical structure can be developed for interleaver design, permitting the identification of a quantitatively optimal interleaver. Simulations show that the mathematics captures some but not all of the essential characteristics of a successful interleaver. Modifying a random interleaver according to these mathematical ideas gives excellent simulation results.

The function of the interleaver in the PMCCC is to assure that at least one of the codeword components has high Hamming weight. For a better PMCCC, we can design an interleaver of permutation length p that maximizes the minimum Hamming weight generated by weight-two inputs. This requires maximizing

σ* ≡ min { |j - i| + |π(j) - π(i)| },  1 ≤ i, j ≤ p, i ≠ j    (121)

where π is the interleaver function. It is also possible to replace the sum with the maximum of the two terms:

s_π ≡ min { |j - i| ∨ |π(j) - π(i)| },  1 ≤ i, j ≤ p, i ≠ j    (122)

An alternate method for interleaver design is to disperse symbols as widely as possible, in a "constellation way". One effective method is to choose s_1 and s_2 and generate π one point at a time. For each i in [1, p], taken sequentially, random values are considered for π(i) until one is found satisfying s_π ≥ s_1. In Figure 85 it is shown how, in curve d, the error floor can be avoided using this method.

The constellation of the interleaver used to obtain curve "d" is presented in Figure 86.
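A non-normative sketch of this point-by-point construction is given below; the exact roles of s_1 (window length) and s_2 (minimum separation) are our interpretation of the spread condition.

    import random

    def spread_interleaver(p, s1, s2, max_tries=1000, seed=0):
        rng = random.Random(seed)
        for _ in range(max_tries):
            pi, remaining = [], list(range(p))
            for i in range(p):
                candidates = [c for c in remaining
                              if all(abs(c - pi[j]) >= s2 for j in range(max(0, i - s1), i))]
                if not candidates:
                    break                       # dead end: restart with a fresh attempt
                choice = rng.choice(candidates)
                pi.append(choice)
                remaining.remove(choice)
            else:
                return pi                       # all p positions filled successfully
        raise RuntimeError("no permutation found; reduce s1 or s2")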
4.2.4 - Simulations

Simulations with:

(a) two equal recursive convolutional constituent codes,
(b) with 16 states,
(c) interleavers of length 4096 and 16384,
(d) using S-random permutations with S = 31 and S = 40,
(e) running each simulation for at least 25 Mbits,

show that the decoding algorithm converges down to BER = 10^-5 at E_b/N_0 below 1 dB with nine iterations.

4.3 Serial Multiple Concatenated Convolutional Codes

4.3.1 Encoder
An SMCCC encoder comprises two serially concatenated recursive systematic convolutional encoders separated by an interleaver. The encoders are arranged in a "serial concatenation". The concatenated recursive systematic convolutional encoders are identical. Figure 87 represents the proposed encoder. An SMCCC encoder is a combination of two simple encoders. The input is a block of information bits. The two encoders generate parity symbols (u_0 and u'_0) from two simple recursive convolutional codes. The key innovation of this technique is an interleaver π, which permutes the original information bits before input to the second encoder. The permutation allows those input sequences for which one encoder produces low-weight codewords to usually cause the other encoder to produce high-weight codewords. Thus, even though the constituent codes are individually weak, the combination is surprisingly powerful. The resulting code has features similar to a "random" block code. In this way, we have the information symbols (u_1 and u_2) and two redundant symbols (u_0 and u'_0). With this redundancy it is possible to reach longer loops and to reduce the peak to average ratio (PAR), at the cost of a slight increase in the constellation encoder.

In Figure 83 we present the conversion that we propose, taking into account the new parity bit.

4.3.2 Decoder
In Figure 88, the block diagram of an iterative decoder is shown. It is based on two modules denoted "SISO", one for each encoder, an interleaver, and a deinterleaver. The SISO module is a four-port device, with two inputs and two outputs.

It accepts as inputs the probability distributions of the information and code symbols labeling the edges of the code trellis, and forms as outputs an update of these distributions based upon the code constraints. The updated probabilities of the input and code symbols are used in the decoding procedure.

The SISO module is a four-port device that accepts at the input the sequences of probability distributions and outputs the sequences of probability distributions based on its inputs and on its knowledge of the code. The output probability distributions represent a smoothed version of the input distributions. The algorithm is completely general and capable of coping with parallel edges and also with encoders with rates greater than one, like those encountered in some concatenated schemes.

The SISO algorithm requires that the whole sequence has been received before starting the smoothing process. The reason is that the backward recursion starts from the final trellis state. A more flexible decoding strategy is offered by modifying the algorithm in such a way that the SISO module operates on a fixed memory span and outputs the smoothed probability distributions after a given delay, D. This new algorithm is called the sliding-window soft-input soft-output (SW-SISO) algorithm.

The SW-SISO algorithm solves the problem of continuously updating the probability distributions without requiring trellis terminations. The computational complexity of such algorithms is in some cases around 5 times that of other suboptimal algorithms like SOVA. This is due mainly to the fact that they are multiplicative algorithms. In this section we overcome this drawback by proposing the additive version of the SISO algorithm.

4.3.3 Interleaver design
SMCCC does not have a problem with floor errors as does PMCCC; the error floor begins below a BER of 10^-7, which makes it suitable for ADSL applications. In an SMCCC the interleaver establishes a relationship between portions of a codeword. For a good SMCCC, we can design an interleaver of permutation length p that maximizes the minimum Hamming weight generated by weight-two inputs. In the SMCCC case, because one of the inputs comes from the outer encoder, the role of the interleaver is not so critical; for this reason the method proposed for the interleaver is to disperse symbols as widely as possible in a "constellation way". One effective method is to choose, for each i in [1, p], π(i) = (p/3) * i. An example of this method is shown in Figure 89.
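An illustrative, non-normative reading of this rule is given below; the reduction modulo p and the adjustment of the step to be coprime with p (so that the mapping is a true permutation) are our assumptions, since the text states only π(i) = (p/3)·i.

    from math import gcd

    def dispersal_interleaver(p):
        step = max(1, round(p / 3))
        while gcd(step, p) != 1:          # ensure the map i -> step*i mod p is a permutation
            step += 1
        return [(step * i) % p for i in range(p)]

    # Example: dispersal_interleaver(10) -> [0, 3, 6, 9, 2, 5, 8, 1, 4, 7]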
4.3.4 - Simulations

Simulations with two equal recursive convolutional constituent codes with 16 states and an interleaver of length between 100 and 1000, using S-random permutation, with each simulation run examining at least 25 Mbits, show that the decoding algorithm converges down to BER = 10^-7 at E_b/N_0 below 1 dB with fewer than nine iterations.

4.4 The number of iterations in the decoder

The number of iterations is a very important subject for the different applications of PMCCC and SMCCC. For applications where the delay is not important, a large number is acceptable. For real-time applications or for quasi-real-time applications it is important to use a number of iterations as low as possible while maintaining the advantages of this technique. The necessary number of iterations depends upon the E_b/N_0 ratio in the receiver. In Figure 51 we present this relationship for the SMCCC case, for values of E_b/N_0 below 0.1 dB; for values around 2 dB it is sufficient to use fewer than 10 iterations.

4.5 Comparisons

The PMCCC has an error floor around a BER of 10^-6. The reason for this is that the SMCCC functions with an inner and outer encoder structure, while the PMCCC functions as two parallel encoders. In Figure 52 we present the error floor effect for PMCCC and show that SMCCC does not exhibit the error floor effect at least until a BER of 10^-9. Simulation beyond 10^-9 requires a great deal of time, and it is not possible to give a simulation result.
5 Reed-Solomon Codes and Turbo Codes for ADSL systems

5.1 Encoder

Figure 90 represents the proposed encoder. A PMCCC encoder is a combination of two simple encoders. The input is a block of information bits. The two encoders generate parity symbols (u_0 and u'_0) from two simple recursive convolutional codes. The key innovation of this technique is an interleaver π, which permutes the original information bits before input to the second encoder. The permutation performed by the interleaver allows those input sequences for which one encoder produces low-weight codewords to usually cause the other encoder to produce high-weight codewords. Thus, even though the constituent codes are individually weak, the combination is surprisingly powerful. The resulting code has features similar to a "random" block code.

In this way, we have the information symbol (u_1) and two redundant symbols (u_0 and u'_0). With this redundancy it is possible to reach longer loops, or to work at higher bit rates on the same loop, at the cost of a slight increase in the constellation encoder. In a preferred embodiment, the reason we suggest the use of two rate-1/2 convolutional encoders in parallel is that this simplifies the turbo-code encoder while still producing results very close to the channel capacity. The use of two rate-2/3 convolutional encoders in parallel would produce a slightly better result (on the order of 0.1 or 0.2 dB) but would increase the complexity at least by a factor of four. From a practical point of view we think that is not necessary. In Figure 83, we have presented the conversion that we propose, taking into account the new parity bit.
5.2 Decoder

In Figure 84, we present the decoder that uses an iterative technique, using two soft-decision input/output trellis decoders in each decoding stage. The Maximum-a-Posteriori (MAP) trellis decoder provides the soft output result suitable for turbo-code decoding.

5.3 Simulations

In Figure 91, we present the simulation results that we obtained with two equal recursive convolutional constituent codes, with an interleaver of length 400 and K = 50. With this value the delay is below 5 msec for 1.5 Mbit/s, assuming a delay of 3 interleaver blocks.

The number of iterations is always below 10. The convolutional encoder used is presented in Figure 92.
6. Forward Error Correction with Low-density parity-check codes

Low-density parity-check codes are codes specified by a matrix containing mostly 0's and only a small number of 1's.

In particular, an (n, j, k) low-density code is a code of block length n with a matrix like that of Table 24, where each column contains a small fixed number, j, of 1's and each row contains a small fixed number, k, of 1's. Note that this type of matrix does not have the check digits appearing in diagonal form as in Table 25. However, for coding purposes, the equations represented by these matrices can always be solved to give the check digits as explicit sums of information digits.
Table 24 - Example of a low-density code matrix: n = 20, j = 3, k = 4

1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 1 1 1 1 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1

(the remaining ten rows form two further sub-matrices, each a column permutation of the five rows above, with a single 1 in each column)

Table 25 - Example of parity-check matrix
Information digits: x1, x2, x3, x4; check digits: x5, x6, x7

x1 x2 x3 x4 x5 x6 x7
 1  1  1  0  1  0  0        x5 = x1 ⊕ x2 ⊕ x3
 1  1  0  1  0  1  0        x6 = x1 ⊕ x2 ⊕ x4
 1  0  1  1  0  0  1        x7 = x1 ⊕ x3 ⊕ x4
These codes are not optimum in the somewhat artificial sense of minimizing the probability of decoding error for a given block length, and it can be shown that the maximum rate at which these codes can be used is bounded below channel capacity. However, a very simple decoding scheme exists for low-density codes, and this compensates for their lack of optimality.

The analysis of a low-density code of long block length is difficult because of the immense number of code words involved. It is simpler to analyze a whole ensemble of such codes, because the statistics of an ensemble permit one to average over quantities that are not tractable in individual codes. From the ensemble behavior, one can make statistical statements about the properties of the member codes. Furthermore, one can with high probability find a code with these properties by random selection from the ensemble.

In order to define an ensemble of (n, j, k) low-density codes, consider Table 24 again. Note that the matrix is divided into j sub-matrices, each containing a single 1 in each column. The first of these sub-matrices contains all its 1's in descending order; i.e., the i-th row contains 1's in columns (i-1)k + 1 to ik. The other sub-matrices are merely column permutations of the first. We define an ensemble of (n, j, k) codes as the ensemble resulting from random permutation of the columns of each of the bottom j - 1 sub-matrices of a matrix such as Table 24, with equal probability assigned to each permutation. There are two interesting results that can be proven using this ensemble, the first concerning the minimum distance of the member codes, and the second concerning the probability of decoding error.
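One member of this ensemble can be drawn with the following non-normative sketch, which builds the deterministic first sub-matrix and appends j-1 random column permutations of it:

    import random

    def gallager_ensemble_matrix(n, j, k, seed=0):
        assert n % k == 0
        rng = random.Random(seed)
        rows = n // k
        base = [[1 if r * k <= c < (r + 1) * k else 0 for c in range(n)] for r in range(rows)]
        H = [row[:] for row in base]
        for _ in range(j - 1):
            perm = list(range(n))
            rng.shuffle(perm)
            H.extend([[row[perm[c]] for c in range(n)] for row in base])
        return H            # j*n/k rows; each column has j ones and each row has k ones

    # gallager_ensemble_matrix(20, 3, 4) gives a 15 x 20 matrix like Table 24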
The minimum distance of a code is the number of positions in which the two nearest code words differ. Over the ensemble, the minimum distance of a member code is a random variable, and it can be shown that the distribution function of this random variable can be overbounded by a function. As the block length increases, for fixed j ≥ 3 and k > j, this function approaches a unit step at a fixed fraction δ_jk of the block length. Thus, for large n, practically all the codes in the ensemble have a minimum distance of at least n δ_jk. In Table 26 this ratio of typical minimum distance to block length is compared to that for a parity-check code chosen at random, i.e., with a matrix filled in with equiprobable independent binary digits. It should be noted that for all the specific nonrandom procedures known for constructing codes, the ratio of minimum distance to block length appears to approach 0 with increasing block length.
The probability of error using maximum likelihood decoding for low-density codes clearly depends upon the particular channel on which the code is being used. The results are particularly simple for the case of the BSC, or binary symmetric channel, which is a binary-input, binary-output, memoryless channel with a fixed probability of transition from either input to the opposite output. Here it can be shown that over a reasonable range of channel transition probabilities, the low-density code has a probability of decoding error that decreases exponentially with block length and that the exponent is the same as that for the optimum code of slightly higher rate, as given in Table 27.

Table 26 - Comparison of δ_jk, the ratio of typical minimum distance to block length for an (n, j, k) code, to δ, the same ratio for an ordinary parity-check code of the same rate

j   k   Rate    δ_jk    δ
5   6   0.167   0.255   0.263
4   5   0.2     0.210   0.241
3   4   0.25    0.122   0.214
4   6   0.333   0.129   0.173
3   5   0.4     0.044   0.145
3   6   0.5     0.023   0.11
Table 27 - Loss of rate associated with low-density codes

j   k   Rate    Rate for equivalent optimum code
3   6   0.5     0.555
3   5   0.4     0.43
4   6   0.333   0.343
3   4   0.25    0.266
Although this result for the BSC shows how closely low-density codes approach the optimum, the codes are not designed primarily for use on this channel. The BSC is an approximation to physical channels only when there is a receiver that makes decisions on the incoming signal on a bit-by-bit basis. Since the decoding procedure to be described later can actually use the channel a posteriori probabilities, and since a bit-by-bit decision throws away available information, we are actually interested in the probability of decoding error of a binary-input, continuous-output channel. If the noise affects the input symbols symmetrically, then this probability can again be bounded by an exponentially decreasing function of the block length, but the exponent is a rather complicated function of the channel and code. It is expected that the same type of result holds for a wide class of channels with memory, but no analytical results have yet been derived. For channels with memory, it is clearly advisable, however, to modify the ensemble somewhat, particularly by permuting the first sub-matrix and possibly by changing the probability measure on the permutations.

7. Application to Modem Communications Systems

In a preferred embodiment, the use of PMCCC or SMCCC is negotiated independently in each direction of communication in the system. In the case of ADSL, this permits the use of a trellis code in one direction and an SMCCC code in the other direction.

Computer Program Listing
This patent application includes a computer program listing containing 37 pages, included as an appendix. The program relates to a Reed-Solomon Encoder and Decoder and a PMCCC Encoder and Decoder for 2 parallel concatenated convolutional codes.

Thus it is seen that the objects, features and advantages of the present invention are efficiently obtained. The preferred embodiment described herein is intended to disclose the best mode of the invention and to teach those having ordinary skill in the art how to make and use the invention, but should not be interpreted as limiting the scope and spirit of the invention as embodied in the appended claims.

Claims

What We Claim Is
1. A method of forward error correction for communication systems, comprising the following steps: producing a symbol stream by forward error coding of a data stream; modulating said symbol stream to produce a modulated signal; and, transmitting said modulated signal over a communication link.

2. The method recited in Claim 1 wherein said communication system is a wired system.

3. The method recited in Claim 1 wherein said communication system is an optical system.

4. The method recited in Claim 1 wherein the modulating is accomplished with a multicarrier method.

5. The method recited in Claim 4 wherein the multicarrier method is a Discrete Multi-Tone (DMT) method.

6. The method recited in Claim 1 wherein the modulating is accomplished with a CAP-QAM single carrier method.

7. The method recited in Claim 1 wherein the modulating is accomplished with a Quadrature Amplitude Modulation (QAM) method.

8. The method recited in Claim 1 wherein the modulating is accomplished using a Pulse Amplitude Modulation (PAM) method.

9. The method recited in Claim 1 wherein said producing of said symbol stream by forward error coding of a data stream is accomplished with a plurality of convolutional coders.

10. The method recited in Claim 9 wherein the plurality of convolutional coders are configured in parallel.

11. The method recited in Claim 9 wherein the plurality of convolutional coders are configured in series.

12. The method recited in Claim 1 wherein said producing of said symbol stream by forward error coding of a data stream is accomplished with a plurality of non-convolutional coders.

13. The method recited in Claim 12 wherein said non-convolutional coder comprises a Reed-Solomon Encoder.

14. The method recited in Claim 12 wherein said non-convolutional coder comprises a low density parity check encoder.

15. The method recited in Claim 12 wherein the plurality of non-convolutional coders are configured in series.

16. The method recited in Claim 1 wherein said producing of said symbol stream by forward error coding of a data stream is accomplished with a non-convolutional coder and a plurality of convolutional coders.

17. The method recited in Claim 16 wherein said non-convolutional coder comprises a Reed-Solomon Encoder.

18. A method of peak power level reduction for communication systems utilizing a plurality of coders, comprising the following steps: producing a peak reduced signal by encoding said data stream by said plurality of coders; modulating said peak reduced signal; and, transmitting the modulated peak reduced signal.

19. A method of forward error correction for communication systems, comprising the following steps: producing a symbol stream by forward error coding of a data stream; modulating said symbol stream to produce a modulated signal; transmitting said modulated signal over a communication link; receiving said modulated signal, where said received modulated signal includes errors; demodulating said received signal which includes errors; decoding said demodulated signal by a plurality of convolutional decoders; and, regenerating said data stream and eliminating said errors.

20. A method of forward error correction for communication systems, comprising the following steps: receiving a modulated signal from a communications link, where said received modulated signal includes errors; demodulating said received signal which includes errors; decoding said demodulated signal by a plurality of convolutional decoders; and, regenerating said data stream and eliminating said errors.
21. An apparatus for forward error correction for communication systems, comprising: means for producing a symbol stream by forward error coding of a data stream; means for modulating said symbol stream to produce a modulated signal; and, means for transmitting said modulated signal over a communication link.

22. The apparatus recited in Claim 21 wherein said communication system is a wired system.

23. The apparatus recited in Claim 21 wherein said communication system is an optical system.

24. The apparatus recited in Claim 21 wherein said means for producing said symbol stream by forward error coding of a data stream is accomplished with a plurality of convolutional coders.

25. The apparatus recited in Claim 24 wherein the plurality of convolutional coders are configured in parallel.

26. The apparatus recited in Claim 24 wherein the plurality of convolutional coders are configured in series.

27. The apparatus recited in Claim 21 wherein said means for producing said symbol stream by forward error coding of a data stream is accomplished with a plurality of non-convolutional coders.

28. The apparatus recited in Claim 27 wherein the plurality of non-convolutional coders are configured in series.

29. The apparatus recited in Claim 21 wherein said means for producing said symbol stream by forward error coding of a data stream is accomplished with a non-convolutional coder and a plurality of convolutional coders.

30. The apparatus recited in Claim 29 wherein said non-convolutional coder comprises a Reed-Solomon Encoder.

31. An apparatus for accomplishing peak power level reduction for communication systems utilizing a plurality of coders, comprising: means for producing a peak reduced signal by encoding said data stream by said plurality of coders; means for modulating said peak reduced signal; and, means for transmitting the modulated peak reduced signal.

32. An apparatus for forward error correction for communication systems, comprising: means for receiving a modulated signal from a communications link, where said received modulated signal includes errors; means for demodulating said received signal which includes errors; means for decoding said demodulated signal by a plurality of convolutional decoders; and, means for regenerating said data stream and eliminating said errors.
PCT/US1999/017369 1998-07-30 1999-07-30 Forward error correcting system with encoders configured in parallel and/or series WO2000007323A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP99938916A EP1101313A1 (en) 1998-07-30 1999-07-30 Forward error correcting system with encoders configured in parallel and/or series

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
US9462998P 1998-07-30 1998-07-30
US60/094,629 1998-07-30
US9839498P 1998-08-30 1998-08-30
US60/098,394 1998-08-30
US13339099P 1999-05-10 1999-05-10
US60/133,390 1999-05-10

Related Child Applications (2)

Application Number Title Priority Date Filing Date
US09744790 A-371-Of-International 1999-07-30
US10/079,202 Continuation-In-Part US20020150167A1 (en) 2001-02-17 2002-02-19 Methods and apparatus for configurable or assymetric forward error correction

Publications (1)

Publication Number Publication Date
WO2000007323A1 true WO2000007323A1 (en) 2000-02-10

Family

ID=27377768

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1999/017369 WO2000007323A1 (en) 1998-07-30 1999-07-30 Forward error correcting system with encoders configured in parallel and/or series

Country Status (2)

Country Link
EP (1) EP1101313A1 (en)
WO (1) WO2000007323A1 (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0617531A1 (en) * 1993-03-25 1994-09-28 Matsushita Electric Industrial Co., Ltd. Multiresolution transmission system
EP0820159A2 (en) * 1996-07-17 1998-01-21 General Electric Company Satellite communications system utilizing parallel concatenated coding
EP0828363A2 (en) * 1996-09-04 1998-03-11 Texas Instruments Incorporated Multicode modem with a plurality of analogue front ends

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
DAVEY M C ET AL: "LOW-DENSITY PARITY CHECK CODES OVER GF(Q)", IEEE COMMUNICATIONS LETTERS, vol. 2, no. 6, pages 165-167, XP000771822, ISSN: 1089-7798 *
VAN EETVELT P ET AL: "PEAK TO AVERAGE POWER REDUCTION FOR OFDM SCHEMES BY SELECTIVE SCRAMBLING", ELECTRONICS LETTERS, vol. 32, no. 21, pages 1963-1964, XP000683518, ISSN: 0013-5194 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114050889A (en) * 2021-11-06 2022-02-15 东南大学 Low-power-consumption wide area network anti-interference method with weight error detection
CN114244471A (en) * 2021-11-29 2022-03-25 河南工程学院 Encoding scheme selection method of incoherent LoRa system
CN114244471B (en) * 2021-11-29 2023-07-21 河南工程学院 Coding scheme selection method of incoherent LoRa system
CN115225202A (en) * 2022-03-01 2022-10-21 南京大学 Cascade coding and decoding method
CN115225202B (en) * 2022-03-01 2023-10-13 南京大学 Cascade decoding method

Also Published As

Publication number Publication date
EP1101313A1 (en) 2001-05-23

Similar Documents

Publication Publication Date Title
Forney et al. Modulation and coding for linear Gaussian channels
US7173978B2 (en) Method and system for turbo encoding in ADSL
EP3605906B1 (en) Transmission of probabilistically shaped amplitudes using partially anti-symmetric amplitude labels
US7555052B2 (en) Method and system for a turbo trellis coded modulation scheme for communication systems
Szczecinski et al. Bit-interleaved coded modulation: fundamentals, analysis and design
KR100484462B1 (en) Communication device and communication method
KR101102396B1 (en) Method of matching codeword size and transmitter therefor in mobile communications system
US20020051501A1 (en) Use of turbo-like codes for QAM modulation using independent I and Q decoding techniques and applications to xDSL systems
US6956872B1 (en) System and method for encoding DSL information streams having differing latencies
US20010031017A1 (en) Discrete multitone interleaver
US20030053557A1 (en) Reduced complexity coding system using iterative decoding
KR20010108266A (en) Communication device and communication method
EP1101313A1 (en) Forward error correcting system with encoders configured in parallel and/or series
Li et al. Iterative demodulation, demapping, and decoding of coded non-square QAM
Zhang et al. Turbo coding in ADSL DMT systems
US6671327B1 (en) Turbo trellis-coded modulation
Sadjadpour Application of turbo codes for discrete multi-tone modulation schemes
Eleftheriou et al. Application of capacity approaching coding techniques to digital subscriber lines
Lauer et al. Turbo coding for discrete multitone transmission systems
JP4342674B2 (en) Communication device
KR20070022569A (en) apparatuses for transmitting and receiving LDPC coded data and a method for modulating and demodulating LDPC coded data using LDPC Code
Zhang et al. Turbo coding for transmission over ADSL
Zhang et al. On Bandwidth Efficient Coded OFDM using Multilevel Turbo Codes and Iterative Multistage Decoding
Ho et al. Joint design of a channel-optimized quantizer and multicarrier modulation
Ning et al. Turbo-coded multi-alphabet binary CPM for concatenated continuous phase modulation

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): JP US

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
WWE Wipo information: entry into national phase

Ref document number: 1999938916

Country of ref document: EP

WWP Wipo information: published in national office

Ref document number: 1999938916

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 09744790

Country of ref document: US

WWW Wipo information: withdrawn in national office

Ref document number: 1999938916

Country of ref document: EP