MXPA96001206A - Decoder and method for the same - Google Patents

Decoder and method for the same

Info

Publication number
MXPA96001206A
MXPA96001206A (application MXPA/A/1996/001206A; also published as MX9601206A)
Authority
MX
Mexico
Prior art keywords
decoder
circuit
bit
probability
bits
Prior art date
Application number
MXPA/A/1996/001206A
Other languages
Spanish (es)
Other versions
MX9601206A (en)
Inventor
D Mueller Bruce
M Nowack Joseph
Original Assignee
Motorola Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Motorola Inc
Publication of MXPA96001206A
Publication of MX9601206A

Abstract

A decoder (223) includes a probability circuit (432) that generates branch metrics using guide bit probabilities that represent structure-to-structure correlation. The branch metrics are input to a lattice decoder (434) that outputs the decoded data bits as a function of the guide bit probabilities for at least predetermined bit positions.

Description

DECODER AND METHOD FOR THE SAME

Field of the Invention

The present invention relates to communication systems, and more particularly to the communication of digital signals.

Background of the Invention

Encoders and decoders are used by transmitters and receivers that communicate information over signal channels. For example, radiotelephones, MODEMs, and video systems include low-rate and high-rate encoders for generating digital signals for communication through a signal channel, and decoders for decoding the signals received from the signal channel. The signal channel may be a twisted wire pair, a cable, air, or the like.

In low-rate video or acoustic systems, for example, analog signals are converted to a sequence of digital data. This original data sequence is encoded to form a message before transmission using a forward error correction code, such as a convolutional code. The encoded signal is transmitted through the signal channel. The receiver receives a sequence of data corresponding to the message; the received sequence may have been corrupted by the signal channel. To detect the original data sequence, the receiver includes a decoder capable of maximum-likelihood decoding, or a similar decoding operation. A maximum-likelihood decoder selects the message mi that maximizes

P{mi} pn(p - Si)     (1)

where P{mi} is the a priori probability that message mi was transmitted, pn() is the multidimensional probability density function of the additive noise coming from the channel, p is the received signal sequence, and Si is a possible transmitted signal sequence. The decoder selects the message mi that maximizes equation 1 (that is, has the highest probability). It is also known that, in the case of additive white Gaussian noise with variance σ², the receiver must find the message minimizing

(p - Si)² - 2σ² ln P{mi}     (2)

The first term, (p - Si)², is the squared Euclidean distance in the signal constellation between the received signal sequence p and a possible signal sequence Si. The second term, 2σ² ln P{mi}, takes into consideration the a priori probability of the transmitted message. Receivers that select the message mi minimizing equation 2 are called maximum a posteriori (MAP) receivers.

Although these two equations are widely used, each is difficult to implement. The a priori codeword probabilities are not precisely known at the decoder, making optimal decoding impossible. Furthermore, if the messages are equally likely, the second term in equation 2 has no weight in the decision and can therefore be omitted, resulting in a maximum-likelihood (ML) receiver in which the noise variance and the a priori message probabilities are not considered. ML decoding is typically implemented in practice using, for example, Viterbi decoding in the case of convolutional codes.

Viterbi decoders of convolutional codes perform error correction on demodulated data by searching for the best path through a "lattice". A section of the lattice is illustrated in Figure 11. In Figure 11, the lattice decoder selects path 00 or 10 at point A based on a "metric" generated from the squared Euclidean distance between the received data sequence and a possible coded sequence ending at point A, the last coded bits being either 00 or 10. The metric is calculated as a function of the sum of the squared Euclidean distances for the previous branches on the surviving paths through the lattice, plus a metric for the branch ending at that point. The path (00 or 10) that has the better metric is selected, and the metric for the best path is stored. The lattice decoder likewise selects between paths 11 and 01 for point B, using the squared Euclidean distances and the metrics for the paths ending at point B.
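To make the two decision rules concrete, here is a minimal sketch of the ML and MAP metrics of equations 1 and 2 (an illustration only, not the patent's implementation; the function names are assumptions):

```python
import math

def ml_metric(rho, s):
    """Squared Euclidean distance between received sequence rho and a
    candidate signal sequence s (the first term of equation 2)."""
    return sum((r - x) ** 2 for r, x in zip(rho, s))

def map_metric(rho, s, prior, sigma2):
    """Equation 2: (p - Si)^2 - 2*sigma^2 * ln P{mi}.
    The candidate minimizing this value is the MAP decision."""
    return ml_metric(rho, s) - 2.0 * sigma2 * math.log(prior)
```

When all priors are equal, the ln P{mi} term is the same for every candidate, so the MAP rule reduces to the ML rule, as the text observes.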
The lattice decoder eliminates the path that has the worse metric, and stores the metric associated with the better path. The lattice decoder then repeats the path selection operation for each of the points C and D in the lattice, and the metrics for the selected paths are stored for each of these points. The Viterbi decoder thus performs an add-compare-select (ACS) operation at each point in the lattice. The process is repeated until all points of the lattice are processed, and the best path through the lattice is selected from the stored metrics.
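A single add-compare-select step of the kind described above can be sketched generically as follows (an illustration, not the patent's circuit; the data-structure names are assumptions):

```python
def acs(path_metrics, predecessors, branch_metrics):
    """One add-compare-select step for a single lattice point.

    path_metrics:   dict state -> accumulated metric of the surviving path
    predecessors:   list of (prev_state, branch_id) pairs entering this point
    branch_metrics: dict branch_id -> metric of that branch

    Returns (surviving_metric, surviving_prev_state): the candidate with
    the lowest accumulated metric survives; the other path is eliminated."""
    candidates = [(path_metrics[p] + branch_metrics[b], p)
                  for p, b in predecessors]
    return min(candidates)
```

Running one such step for every point in a time unit, and one time unit per coded bit pair, yields the full Viterbi search the text describes.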
A prior art decoder has been proposed that uses the structure-to-structure correlation of the acoustic code parameters to decode acoustic signals. This decoder operates on a multi-bit parameter and makes a single path selection decision after evaluating the relationship between a possible parameter value X in a current structure and each of the values Y that the parameter could have had in a previous structure. Thus, for a parameter having five binary bits, the decoder selects one of 32 possible paths between the 32 possible values in a previous structure and a possible value X in a current structure. A single path is selected for a possible value X by considering the probability that the parameter has its current value X given that the previous value was Y, for each of the 32 possible previous values Y. This decoder uses a correlation memory, storing thirty-two probability values P{X|Y} for the current value X, to make the selection. Each probability value P{X|Y} is the probability that the parameter has the value X in a current structure given that the parameter had the value Y in a previous structure. Because there are 32 possible values X for the five-bit parameter and 32 possible values Y for each possible value X, the correlation memory must store a 32-by-32 correlation metric matrix for this parameter alone, and other parameters require additional large metric memories. The resulting guide parameter decoder that uses structure-to-structure parameter correlation values is therefore very complex to implement. Accordingly, it is desirable to provide a decoder with improved operating characteristics that does not require a highly complex decoding operation.

Brief Description of the Drawings

Figure 1 is a schematic circuit in the form of a block diagram of a communication system. Figure 2 is a schematic circuit in the form of a block diagram of another communication system.
Figure 3 is a schematic circuit in the form of a block diagram of a digitally encoded communication system. Figure 4 is a schematic circuit in the form of a block diagram of a decoder according to Figure 3. Figure 5 is a schematic circuit in the form of a block diagram of a decoder according to Figure 4. Figure 6 is a schematic circuit in the form of a block diagram of a decoder according to Figure 5. Figure 7 is a diagram illustrating the decoded bit error rate (BER) as a function of the bit position at the decoder output. Figure 8 is a flowchart representing the relocation of the bits of a structure in an encoder. Figure 9 is a flowchart representing the decoding of the structure. Figure 10 is a flowchart representing the relocation of the bits of a structure in a decoder. Figure 11 illustrates a section of a lattice structure.

Detailed Description of the Invention

A communication system includes a decoder that stores correlation values on a bit-by-bit basis. A branch metric is generated for at least predetermined bit positions, taking advantage of the high structure-to-structure correlation of some bits to improve the decoded bit error rate. The system also allows the bits of a structure to be ordered independently of other bits in a parameter. This is particularly advantageous because the Applicants have ordered the bits of the structure in a way that maximizes voice quality, takes advantage of the correlation between bit structures, and allows real-time reception of the acoustic signal without a highly complex branch metric generator calculation.

In Figure 1 there is illustrated a communication system 100 in which the invention can be employed. The system includes a transceiver 114 communicating with a transceiver 116 through a signal channel 106. The transceiver 116 employs a transmitter 105 and a receiver 111. The transmitter 105 conditions the signals for transmission on the signal channel 106.
The receiver 111 conditions the signals received from the signal channel 106 for use by the downstream circuitry, such as a coder/decoder (CODEC) 110. The CODEC 110 encodes the signals to be communicated by the transceiver 116 and decodes the signals received by the receiver 111. Similarly, the transceiver 114 includes a transmitter 117 and a receiver 121, and a CODEC 108 is coupled to the transceiver 114. The transceiver may include a modulator/demodulator (MODEM) and may be employed by a computer or other device for data communication, a radio, a radiotelephone, a landline telephone, or any other communication device. The transmission medium is one or more twisted wire pairs, one or more coaxial cables, optical fibers, air, or any other conventional communication medium.

In Figure 2 a radio communication system is illustrated. The radio communication system includes at least two of the devices 200, 201, and 213, such as two-way radios, cellular telephones, cordless telephones, base stations, or the like. In the case of cellular radiotelephones, devices 201 and 213 are radiotelephones and device 200 is a fixed site, or base station. Alternatively, for a cordless telephone, device 200 is a base and device 201 is an associated cordless handset. For two-way radios, devices 201 and 213 communicate directly. Regardless of the environment, the device 201 (a remote communication device) includes a microphone 202 and an encoder circuit 207, illustrated as a voice encoder, for converting the analog signals output by the microphone into a digital signal applied to the transmitter 105. The transmitter 105 modulates the encoded signal and supplies the signal to the antenna 203. The signals received by the communication device 201 are detected by the antenna 203 and are supplied to the receiver 111, which demodulates the signals and outputs a demodulated signal to a decoder circuit 209, illustrated as a voice decoder.
The decoder circuit 209 converts the signal to an analog signal that drives the speaker 204. The signal communicated to the device 200 (a fixed site) from the device 201 (a remote communication device) is detected by the antenna 206, demodulated by the receiver 212, decoded by the decoder circuit 214, and input to a hybrid 216. The hybrid 216 separates the receive and transmit paths of the device 200, and supplies the decoded signal output by the decoder to the landline 225 for communication to a local office (not shown). The signals received from the landline 225 are coupled to an encoder circuit 226 through the hybrid 216. The encoded signals output by the encoder circuit 226, illustrated as a voice encoder, are input to the transmitter 228, which drives the antenna 206. The device 213 (a remote communication device) includes an antenna 215, a transmitter 117, and an encoder circuit 219, illustrated as a voice encoder coupled to a microphone 220. The device also includes a receiver 121, a decoder circuit 223, and a speaker 224. The second device operates in substantially the same manner as device 201. Those skilled in the art will recognize that devices 201 and 213 communicate directly in the case of two-way radios, without the device 200.

The encoder circuit 207 (FIG. 3) includes an analog-to-digital (A/D) converter 303 connected to the microphone 202 to convert the analog signals output by the microphone to digital signals. The encoder circuit further includes a digital source encoder 316, a group selection circuit 305, a forward error correction (FEC) encoder 318, and an interleaver 320. The digital source encoder 316 generates data sequences for transmission through the transmitter. The group selection circuit 305 is connected to the encoder 316 and reorders the bits of the digital signal output by the encoder. The FEC encoder 318 encodes the data output by the group selection circuit.
The interleaver 320 interleaves the data output by the FEC encoder with other data bits for transmission through the signal channel 106. Although there are advantages in providing the interleaver 320, the interleaver can be omitted from the encoder circuit 207, since it is not necessary to the present invention. The encoder circuits 219 and 226 have substantially the same construction as the encoder circuit 207. The encoder circuit 207 may be implemented in one or more microprocessors, a digital signal processor, MODEMs, combinations thereof, or discrete circuit components. The transmitter 105 modulates and amplifies the encoded signal output by the encoder circuit 207 for transmission over the signal channel 106. The receiver 121 demodulates the signals received from the signal channel 106.

The decoder circuit 223 includes a soft decision circuit 322 (FIG. 3), a deinterleaver 324, a FEC decoder 326, a group reselection circuit 330, a source decoder 328, and a D/A converter 332. The soft decision circuit 322 converts the signals input from the signal channel 106 to predetermined digital levels. Although a soft decision circuit is illustrated, those skilled in the art will recognize that a hard decision circuit may be employed instead. The deinterleaver 324 is connected to the output of the soft decision circuit to remove the interleaving applied by the interleaver 320. If the interleaver 320 is omitted from the encoder circuit, the deinterleaver 324 is not used in the decoder circuit 223. The FEC decoder 326 is connected to the output of the deinterleaver 324 to decode the data output by it. The decoded signal is reordered in the group reselection circuit 330 and input to the source decoder 328. The source decoder 328 decodes the data output by the FEC decoder 326.
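The interleaver 320 and the deinterleaver 324 are complementary operations. A minimal block-interleaver sketch (an illustration only; the patent does not specify the interleaving scheme, and the dimensions here are made up) is:

```python
def interleave(bits, rows, cols):
    """Write bits row by row into a rows x cols block; read column by column."""
    assert len(bits) == rows * cols
    return [bits[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(bits, rows, cols):
    """Inverse of interleave: undo the column-wise read order."""
    out = [None] * (rows * cols)
    i = 0
    for c in range(cols):
        for r in range(rows):
            out[r * cols + c] = bits[i]
            i += 1
    return out
```

Spreading adjacent coded bits apart in time this way converts channel burst errors into isolated errors that the FEC decoder can correct.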
The output of the source decoder is converted to an analog signal in a digital-to-analog (D/A) converter 332, amplified by an amplifier (not shown), and output to the speaker 224 on the conductor 329. The decoder circuits 209 and 214, which are substantially similar to the decoder circuit 223, are illustrated as voice decoders. The decoder 223 may be implemented in one or more microprocessors, a digital signal processor, MODEMs, combinations thereof, or discrete circuit elements.

The forward error correction decoder 326 includes a probability circuit 432 (FIG. 4) and a lattice decoder 434. The probability circuit receives the deinterleaved data structures on the conductor 325 at the input 425.
The input 425 is coupled to a data signal source (through the antenna 215 of Figure 2 and the receiver 121). The probability circuit 432, which is a branch metric generator, supplies a branch metric to the lattice decoder 434 and has an output coupled to the signal bus 436. The probability circuit generates a branch metric as a function of a guide bit probability for at least predetermined bit positions, as described in more detail below. The lattice decoder outputs a stream of data on the conductor 327 in response to the branch metrics.

The probability circuit 432 includes a branch metric generator 540 (FIG. 5), a memory circuit 542, which is a structure-to-structure correlation metric memory, and a prior structure storage circuit 545. The prior structure storage circuit 545 retains the previous structure output by the lattice decoder 434, and is implemented using a shift register, random access memory (RAM), or the like. The memory circuit 542 stores a 2-by-2 probability metric matrix for each bit in a structure. The memory can be implemented using a read-only memory (ROM), such as an electrically erasable programmable read-only memory (EEPROM), or a random access memory (RAM) with battery backup to prevent loss of its contents on power loss, or the like. The memory output is generated on a bit-by-bit basis, for each bit in the previous structure. The values stored in the memory were generated empirically from digitized voice, stored over time in the form of voice signal structure data values. The number of times a bit remained 0 or 1 over two sequential structures, and the number of times a bit value changed over two sequential structures, were measured on a per-bit-position basis, and the probabilities were generated from these counts. The probabilities output by the memory are supplied to the branch metric generator via a signal bus 558. The branch metric is supplied to the lattice decoder 434 through the signal bus 436.
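The empirical counting procedure just described might be sketched as follows (illustrative only; the function and variable names are assumptions, and the patent's actual tables were built offline from stored digitized voice):

```python
def transition_probabilities(frames):
    """Estimate the 2-by-2 matrix P{j|k} for each bit position from a
    sequence of equal-length bit structures, by counting how often each
    bit stays the same or changes between sequential structures."""
    n = len(frames[0])
    # counts[pos][k][j]: times the bit at pos went from k (previous) to j (current)
    counts = [[[0, 0], [0, 0]] for _ in range(n)]
    for prev, cur in zip(frames, frames[1:]):
        for pos in range(n):
            counts[pos][prev[pos]][cur[pos]] += 1
    probs = []
    for pos in range(n):
        matrix = []
        for k in (0, 1):
            total = counts[pos][k][0] + counts[pos][k][1]
            # With no observations, fall back to "no correlation" (0.5)
            matrix.append([counts[pos][k][j] / total if total else 0.5
                           for j in (0, 1)])
        probs.append(matrix)
    return probs  # probs[pos][k][j] = P{current bit = j | previous bit = k}
```

A strongly diagonal matrix at a bit position indicates high structure-to-structure correlation; a matrix near 0.5 everywhere indicates an uncorrelated bit.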
The possible coded signals are supplied to the branch metric generator on the bus 528. The lattice decoder selects a branch according to the value of the branch metric. If the value of the branch metric is better for one path, the lattice decoder chooses that path; if the value is the same for both paths, the lattice decoder chooses a path arbitrarily. The bit value for each bit position is taken from the path through the lattice that is selected. The decoded bits of the lattice decoder are output on the conductor 327.

Figure 6 illustrates a novel circuit that can be used advantageously to generate the branch metric. This circuit includes a squared Euclidean distance circuit 650 that generates the sum

Σ (pi - Si)²,  i = 1 to n,

where pi is the input data bit, Si is a possible branch symbol (constellation point) output from the lattice decoder 434, and n is the number of symbols per lattice branch. The sum output by the squared Euclidean distance generator is input to an adder 552. The decoded output data from the lattice decoder 434 on the conductor 327 is input to the prior structure storage circuit 545. The L bits in the prior structure storage circuit 545 are input to the memory circuit 542 on a bit-by-bit basis through the conductor 557. For each bit, the memory circuit 542 outputs on the signal bus 558 two probabilities with respect to the bit in the same position of the current structure. One is the probability that the bit value remains the same from structure to structure, given the position of the bit and the value of the bit in the previous structure. The other is the probability that the bit value changes, given the position of the bit and the value of the bit in the previous structure. In this way, the memory receives the value of the previous bit at an input and outputs the probability that this bit will change and the probability that it will remain the same.
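Combining the squared-distance sum with the a priori term of equation 2 gives branch metrics of the following form (a sketch under the assumption of one data bit per branch; the names are illustrative, not the patent's):

```python
import math

def branch_metric(rho, symbols, prob, prev_bit, cur_bit, sigma2):
    """Branch metric = sum of squared Euclidean distances minus
    2*sigma^2 * ln P{cur_bit | prev_bit} (equation 2 applied per branch).

    prob: 2x2 matrix for this bit position, prob[k][j] = P{j|k}."""
    distance = sum((r - s) ** 2 for r, s in zip(rho, symbols))
    prior = prob[prev_bit][cur_bit]
    return distance - 2.0 * sigma2 * math.log(prior)
```

Because ln P{j|k} is negative, subtracting the term increases the metric, penalizing branches whose bit value is unlikely given the previous structure.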
The output of the memory circuit is essentially a sequence of guide bit structure-to-structure correlation values associated with at least the selected bit positions in the data structures. The probabilities output by the memory, or structure-to-structure correlation values, are input to an a priori distortion circuit 660. The distortion circuit combines an estimate of the noise variance, σ², on a bus 562, with the two probabilities output by the memory circuit 542, and produces two respective output values, 2σ² ln P{j|k}, where one value is a function of the probability that the bit changes and the other is a function of the probability that the bit remains the same. These two values are combined with the squared Euclidean distance sum in the adder 552 to produce two branch metrics that are coupled to the lattice decoder on the bus 436.

A reduction to practice of the invention in a conventional GSM system is now described as an example. The digital source encoder 316 is a GSM digital voice encoder that outputs a sequence of data blocks, or structures, where each structure corresponds to 20 ms of speech, contains 18 parameters, and has a total of 112 bits. The group selection circuit 305 produces the parameters in a unique sequence and format. Tables 1 and 2 list the sequence of parameters and the number of bits assigned to each parameter. Table 1 lists the sequence for a deaf (unvoiced) structure (MODE = 0).
Table 1. Encoder output parameters in order of real-time occurrence and bit allocation for the deaf (unvoiced) acoustic structures (MODE 0).

Parameter   No.  Description                        Bits (MSB-LSB)
MODE         0   conversation mode                  b0-b1
R0           1   energy of the structure            b2-b6
LPC1         2   reflection coefficients r1-r3      b7-b17
LPC2         3   reflection coefficients r4-r6      b18-b26
LPC3         4   reflection coefficients r7-r10     b27-b34
INT_LPC      5   interpolation bit                  b35
CODIGO1_1    6   1st subframe codebook code I       b36-b42
CODIGO2_1    7   1st subframe codebook code H       b43-b49
GSP0_1       8   1st subframe {P0, GS} code         b50-b54
CODIGO1_2    9   2nd subframe codebook code I       b55-b61
CODIGO2_2   10   2nd subframe codebook code H       b62-b68
GSP0_2      11   2nd subframe {P0, GS} code         b69-b73
CODIGO1_3   12   3rd subframe codebook code I       b74-b80
CODIGO2_3   13   3rd subframe codebook code H       b81-b87
GSP0_3      14   3rd subframe {P0, GS} code         b88-b92
CODIGO1_4   15   4th subframe codebook code I       b93-b99
CODIGO2_4   16   4th subframe codebook code H       b100-b106
GSP0_4      17   4th subframe {P0, GS} code         b107-b111

Table 2 lists the sequence for a harmonized (voiced) structure (MODE = 1, 2, or 3).

Table 2. Encoder output parameters in order of real-time occurrence and bit allocation for the harmonized (voiced) acoustic structures (MODE 1, 2, 3).

Parameter   No.  Description                        Bits (MSB-LSB)
MODE         0   conversation mode                  b0-b1
R0           1   energy of the structure            b2-b6
LPC1         2   reflection coefficients r1-r3      b7-b17
LPC2         3   reflection coefficients r4-r6      b18-b26
LPC3         4   reflection coefficients r7-r10     b27-b34
INT_LPC      5   interpolation bit                  b35
LAG_1        6   1st subframe delay                 b36-b43
CODIGO_1     7   1st subframe codebook code I       b44-b52
GSP0_1       8   1st subframe {P0, GS} code         b53-b57
LAG_2        9   2nd subframe delta delay           b58-b61
CODIGO_2    10   2nd subframe codebook code I       b62-b70
GSP0_2      11   2nd subframe {P0, GS} code         b71-b75
LAG_3       12   3rd subframe delta delay           b76-b79
CODIGO_3    13   3rd subframe codebook code I       b80-b88
GSP0_3      14   3rd subframe {P0, GS} code         b89-b93
LAG_4       15   4th subframe delta delay           b94-b97
CODIGO_4    16   4th subframe codebook code I       b98-b106
GSP0_4      17   4th subframe {P0, GS} code         b107-b111

The bits are placed in the structure according to their importance. The inventors of the present invention have carefully evaluated the performance of devices that employ voice encoders, and identified the relative importance of the bits in an acoustic structure, as listed in Tables 3 and 4.
Table 3. Importance of the coded bits for the deaf acoustic structures (MODE 0). Class of Variable Number Number Parameter Parameters of bits 1 MODE bO, bl RO i b2 .......... 2 RO b3 LPC1 2 b7 RO "T" b4 LPC1 2 b8, b9, blO, bll LPC2 3 bl8, bl9 3 GSP0_1 8 b50, b51 GSP0_2 11 b69, b70 GSP0_3 14 b88, b89 GSP0_4 17 bl07, bl08 .......... LPC1 bl2, bl3 LPC2 3 b20 LPC3 4 b27, b28, b29 4 GSP0_1 8 b52 GSP0_2 11 b71 GSP0_3 14 b90 GSP0_4 17 bl09 .......... RO b5, b6 LPC1 2 bl4, bl5, bl6, bl7 LPC2 3 b21, b22, b23, b24, b25, b26 LPC3 4 b30, b31, b32, b33, b34 5 INT_LPC 5 b35 GSP0_1 8 b53, b54 GSP0_2 11 b72, b73 GSP0_3 14 b91, b92 GSP0_4 17 bllO, blll C0DIG02_1 7 b43, b44, b45, b46, b47, b48, b49 C0DIG01_2 9 b55, b56, b57, b58, b59, b60, b61 6 C0DIG02_2 10 b62, b63, b64, b65, b66, b67, b67, b68 C0DIG01_3 12 b74, b75, b76, b77, b78, b79, b80 C0DIG02_3 13 bdl, b82, b83, b84, b85, b86, b87 C0DIG01_4 15 b93, b94, b95, b96, b97, b98, b99 CODIG02 4 16 blOO, blOl, bl02, bl03 bl04, bl05, bl06 Table 4. Importance of the coded bits for harmonized acoustic structures (MODE 1, 2, or 3). Class of Number Name Variable Number Parameter Bit Parameters MODE 0 bO, bl RO 1 b2, b3, b4 LPC1 2 b7, b8, b9, blO, bll LAG_1 6 b36, b37, b38 ....... .. "RO? .5 LPC1 2 bl2, bl3, bl4, bl5 LPC2 3 bl8, bl9 LAG_1 6 b39, b40, b41 LAG_2 9 b58, b59 LAG_3 12 b76, b77 LAG_4 15 b94 ......... 
"RO b6 LPC1 2 bl6, bl7 LPC2 3 b20, b21, b22 LPC3 4 b27, b28, b29, b30 LAG_4 15 b95 GSP0_1 8 b53 GSP0_2 11 b71 GSP0_3 14 b89 GSP0_4 17 bl07 LPC3 4 b31, b32 LAG_1 6 b42 LAG_2 9 b60 LAG_3 12 b78 GSP0_1 8 b54, b55 GSP0_2 11 b72, b73 GSP0_3 14 b90, b91 GSP0_4 17 bl08, bl09 LPC3 4 b33, b34 INT_LPC 5 b35 LAG_1 6 b43 LAG_2 9 b61 LAG_3 12 b79 LAG_4 15 b96, b97 GSP0_1 8 b56, b57 GSP0_2 11 b74, b75 GSP0_3 14 b92, b93 GSPO 4 17 bllO, blll Table 4 (continued) Class of Variable Number Number Name Parameter Bit Parameters C0DIG0_1 7 b44 - b52 6 C0DIG0-2 10 b62 - b70 C0DIG0_3 13 b80 - b88 C0DIG0_4 16 b98 - bl06 In this way, it can be seen that the order of the acoustic bits encoded in the FEC encoder depends on the MODE. Table 5 lists the re-ordered sequence of bits for the deaf structures emitted by the group selection circuit 305, where the value b (for example bO) is the number of bits before reordering and the number immediately to the right of the value b (for example 94) is the position of the bit. In this way, bit 0 (bO) of the original structure is in position 94 (the 95th bit) in the reordered structure. Those bits that are not assigned are indicated by a prefix "u".
Table 5. Re-ordering of the acoustic bit before the FEC encoding for the deaf acoustic structures (MODO-0) and after. bO 94 b28 12 b56 44 b84 36 bl 93 b29 13 b57 43 b85 ul6 b2 92 b30 14 b58 42 b86 ul5 b3 91 b31 15 b59 41 b87 ul4 b4 89 b32 16 b60 40 b88 82 b5 0 b33 17 b61 18 b89 75 b6 38 b34 37 b62 19 b90 4 b7 90 b35 39 b63 20 b91 66 b8 88 b36 59 b64 21 b92 61 b9 87 b37 58 b65 22 b93 ul3 blO 86 b38 57 b66 23 b94 ul2 bll 85 b39 56 b67 24 b95 ull bl2 73 b40 55 b68 25 b96 ul0 bl3 72 b41 54 b69 83 b97 u9 bl4 71 b42 53 b70 76 b98 u8 bl5 70 b43 52 b71 3 b99 u7 bl6 69 b44 51 b72 67 blOO u6 bl7 6 b45 50 b73 62 blOl u5 bl8 80 b46 49 b74 26 bl02 u4 bl9 79 b47 48 b75 27 bl03 u3 b20 78 b48 47 b76 28 bl04 u2 b21 7 b49 46 b77 29 bl05 ul b22 8 b50 84 b78 30 bl06 uO b23 9 b51 77 b79 31 bl07 81 b24 10 b52 2 b80 32 bl08 74 b25 11 b53 68 b81 33 bl09 5 b26 64 b54 63 b82 34 bllO 65 b27 1 b55 45 b83 35 blll Table 6 lists the bit sequence of the acoustic structures emitted by the group selection circuit 305 for the acoustic structures. Table 6. Re-ordering of the acoustic bit before the FEC coding for the harmonized acoustic structures (MODE - 1, 2, or 3). bO 94 b28 63 b56 17 b84 33 bl 93 b29 62 b57 18 b85 ul6 b2 92 b30 60 b58 19 b86 ul5 b3 91 b31. 
59 b59 20 b87 ul4 b4 83 b32 58 b60 21 b88 ul3 b5 72 b33 48 b61 22 b89 ul2 b6 65 b34 46 b62 23 b90 ull b7 90 b35 44 b63 24 b91 ul8 b8 89 b36 86 b64 25 b92 u9 b9 88 b37 85 b65 5 b93 7 blO 87 b38 84 b66 9 b94 11 bll 82 b39 76 b67 13 b95 15 bl2 81 b40 75 b68 36 b96 38 bl3 80 b41 71 b69 40 b97 42 bl4 79 b42 57 b70 26 b98 u8 bl5 0 b43 53 b71 27 b99 u7 bl6 1 b44 74 b72 28 blOO u6 bl7 67 b45 70 b73 29 blOl u5 bl8 78 b46 56 b74 34 bl02 u4 bl9 77 b47 52 b75 35 bl03 u3 b20 2 b48 73 b76 30 bl04 u2 b21 3 b49 69 b77 31 bl05 ul b22 4 b50 55 b78 32 bl06 or b23 64 b51 51 b79 6 bl07 8 b24 61 b52 68 b80 10 bl08 12 b25 49 b53 54 b81 14 bl09 16 b26 45 b54 50 b82 37 bllO 39 b27 66 b55 47 b83 41 blll 43

The group selection circuit 305 thus locates the bits that have the most importance such that they enter the FEC encoder 318 at the ends. Figure 7 shows lattice bit position versus bit error rate for a decoded acoustic structure in a GSM system. As can be seen, the bit error rate is lowest for bits placed near the front and back of the lattice. Accordingly, the Applicants have determined that the highest priority bits should be placed at the front and at the back of the lattice after rearrangement in the group selection circuit, for higher acoustic quality. The operation of the coding system will now be described with reference to Figures 8-10. Prior to encoding in the FEC encoder 318 of FIG. 3, a structure is reordered in the group selection circuit 305, as indicated in block 800 (FIG. 8). Initially it is determined whether the structure is a deaf (unvoiced) acoustic structure or a harmonized (voiced) acoustic structure, in the decision block 802. If the structure is a deaf acoustic structure (Mode 0, Table 1), the bits are rearranged, or reassigned, to the positions identified in Table 5, as indicated in block 806. In this table, the number b on the left represents the bit's position in the original structure, and the number immediately to the right of the b value is the bit's reordered position.
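The table-driven reordering can be sketched as follows (an illustrative 8-bit permutation; the real tables map 112 bits, and the positions below are made up):

```python
# Hypothetical reordering table: TABLE[i] is the new position of bit i.
TABLE = [5, 0, 7, 2, 1, 6, 3, 4]

def reorder(bits, table):
    """Move each bit i of the structure to position table[i]."""
    out = [0] * len(bits)
    for i, b in enumerate(bits):
        out[table[i]] = b
    return out

def inverse(table):
    """Inverse permutation, of the kind a group reselection circuit would
    apply to restore the original bit order after decoding."""
    inv = [0] * len(table)
    for i, pos in enumerate(table):
        inv[pos] = i
    return inv
```

In the patent's table for deaf structures, for example, bit b0 moves to position 94 and bit b5 moves to position 0.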
In this way, after reordering the bits, bit 0 (b0) moves to position 94; bit 1 (b1) moves to position 93; and bit 28 (b28) moves to position 12. If the structure is a harmonized (voiced) acoustic structure (Modes 1, 2, and 3), as determined in decision block 802, the bits are reassigned according to Table 6, as indicated in block 808. The b values and the numbers to the right thereof in Table 6 likewise represent the original position of the bit and the position of the rearranged bit. The reordered bits are output to the FEC encoder (318 in FIG. 3), as indicated in block 810. The FEC encoder 318, the interleaver 320, and the transmitter 105 condition the encoded signal for transmission through the signal channel 106 (see FIG. 3).

The receiver 121 (FIG. 3) demodulates the received signal. A soft decision as to the data bit levels is made using the soft decision circuit 322. The interleaved data is restored to order in the deinterleaver 324, which is complementary to the interleaver 320. The FEC decoder 326 receives the deinterleaved bits from the deinterleaver 324, as indicated in block 900 (FIG. 9). The input bits are introduced serially to the squared Euclidean distance generator 550, which outputs a sum (from 1 to n) of the squared Euclidean distances (pi - Si)², as indicated in block 902, where pi is the input data bit from the deinterleaver and Si is a predicted bit value in the lattice decoder. The value of this sum is output to the adder 552. A probability is generated from the bits of the previous structure stored in the prior structure storage circuit 545, as indicated in block 904. These bits are input individually to the memory circuit 542. The memory circuit 542 stores the probabilities P{j|k}, which are illustrated in Table 7.
Table 7 lists the probabilities P{j|k} for each of the 95 bits that enter the FEC encoder 318 (FIG. 3), where P{j|k} is the probability that the bit in the current structure has the value j if the bit in the same bit position of the previous structure had the value k. A value of 1.0 means that the bit always has the same value as in the previous structure (a high structure-to-structure correlation) and a value of 0.5 means that the value of the bit is completely independent of the value in the previous structure (a low structure-to-structure correlation).

Table 7. Conditional probabilities of the acoustic bit value given the bit value of the previous acoustic structure.

Bit   P{0|0}   P{1|0}   P{0|1}   P{1|1}
 0    0.609    0.391    0.438    0.562
 1    0.576    0.424    0.410    0.590
 2    0.520    0.480    0.312    0.688
 3    0.603    0.397    0.398    0.602
 4    0.562    0.438    0.369    0.631
 5    0.584    0.416    0.393    0.607
 6    0.547    0.453    0.354    0.646
 7    0.506    0.494    0.332    0.668
 8    0.569    0.431    0.365    0.635
 9    0.471    0.529    0.392    0.608
10    0.456    0.544    0.396    0.602
11    0.500    0.500    0.427    0.573
12    0.493    0.507    0.387    0.613
13    0.551    0.449    0.456    0.544
14    0.485    0.515    0.442    0.558
15    0.520    0.480    0.460    0.540
16    0.479    0.521    0.438    0.562
17    0.484    0.516    0.481    0.519
18    0.478    0.522    0.489    0.511
19    0.494    0.506    0.501    0.499
20    0.501    0.499    0.520    0.480
21    0.498    0.502    0.522    0.478
22    0.517    0.483    0.508    0.492
23    0.504    0.496    0.498    0.502
24    0.505    0.495    0.494    0.506
25    0.494    0.506    0.512    0.488
26    0.481    0.519    0.484    0.516
27    0.498    0.502    0.504    0.496
28    0.507    0.493    0.499    0.501
29    0.503    0.497    0.504    0.496
30    0.509    0.491    0.492    0.508
31    0.499    0.501    0.510    0.490
32    0.495    0.505    0.501    0.499
33    0.470    0.530    0.491    0.509
34    0.509    0.491    0.503    0.497
35    0.509    0.491    0.511    0.489
36    0.487    0.513    0.462    0.538
37    0.527    0.473    0.473    0.527
38    0.609    0.391    0.502    0.498
39    0.459    0.541    0.420    0.580
40    0.514    0.486    0.503    0.497
41    0.531    0.469    0.495    0.505
42    0.507    0.493    0.521    0.479
43    0.491    0.509    0.488    0.512
44    0.390    0.610    0.295    0.705
45    0.504    0.496    0.468    0.532
46    0.500    0.500    0.404    0.596
47    0.473    0.527    0.494    0.506
48    0.506    0.493    0.397    0.603
49    0.527    0.473    0.452    0.548
50    0.540    0.460    0.487    0.513
51    0.503    0.497    0.505    0.495
52    0.483    0.517    0.488    0.512
53    0.478    0.522    0.483    0.517
54    0.574    0.426    0.553    0.447
55    0.530    0.470    0.528    0.472
56    0.525    0.475    0.520    0.480
57    0.505    0.495    0.513    0.488
58    0.504    0.496    0.376    0.624
59    0.535    0.465    0.374    0.626
60    0.577    0.423    0.401    0.599
61    0.520    0.480    0.458    0.542
62    0.664    0.336    0.353    0.647
63    0.599    0.401    0.310    0.690
64    0.519    0.481    0.384    0.616
65    0.538    0.462    0.492    0.508
66    0.649    0.351    0.293    0.707
67    0.490    0.510    0.445    0.555
68    0.425    0.575    0.379    0.621
69    0.569    0.431    0.559    0.441
70    0.586    0.414    0.540    0.460
71    0.542    0.458    0.478    0.522
72    0.608    0.392    0.439    0.561
73    0.476    0.524    0.372    0.628
74    0.452    0.548    0.362    0.638
75    0.555    0.445    0.396    0.604
76    0.633    0.367    0.322    0.678
77    0.715    0.285    0.390    0.610
78    0.738    0.262    0.223    0.777
79    0.604    0.396    0.368    0.632
80    0.533    0.467    0.295    0.705
81    0.546    0.454    0.245    0.755
82    0.588    0.412    0.265    0.725
83    0.664    0.336    0.258    0.742
84    0.720    0.280    0.220    0.780
85    0.736    0.264    0.256    0.744
86    0.761    0.239    0.224    0.776
87    0.582    0.418    0.246    0.754
88    0.740    0.260    0.291    0.709
89    0.762    0.238    0.254    0.746
90    0.705    0.295    0.150    0.850
91    0.865    0.135    0.357    0.643
92    0.890    0.110    0.104    0.896
93    0.653    0.347    0.281    0.719
94    0.886    0.114    0.176    0.824

For example, for bit b0, which occupies reordered bit position 94: if the value of the bit at position 94 in the previous structure was 0, the probability that the bit in the same bit position of the current structure is 0 is 0.886; if the previous value was 0, the probability that the current bit is 1 is 0.114; if the previous value was 1, the probability that the current bit is 0 is 0.176; and if the previous value was 1, the probability that the current bit is 1 is 0.824. 
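A minimal Python sketch of the Table 7 lookup for one bit position, using the four probabilities quoted above for bit position 94 (the dictionary layout is an assumption for illustration, not the patent's memory organization):

```python
# P{j|k} for bit position 94, keyed by (k = previous value, j = current value).
P_BIT_94 = {
    (0, 0): 0.886, (0, 1): 0.114,
    (1, 0): 0.176, (1, 1): 0.824,
}

def conditional_probability(table, previous_value, current_value):
    """Probability that the current structure's bit equals current_value,
    given the value of the same bit position in the last reliable structure."""
    return table[(previous_value, current_value)]
```

Note that for each previous value k, the two entries P{0|k} and P{1|k} sum to 1.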
The lattice contains 95 data bits, 3 cyclic redundancy check (CRC) bits, and 6 tail bits, which require 104 units of time. Each unit of time in turn has 64 ACSs. Each ACS involves two bifurcation metrics, and each bifurcation metric includes a correlation value. Accordingly, the bifurcation metric generator 540 outputs two values for each point in the lattice. For example, the correlation values output from the memory 442 are 0.886 and 0.114 when the memory input for the previous structure is 0 for bit position 94. The correlation values output by the memory 442 are 0.176 and 0.824 when the memory input is 1 for bit position 94. Each of the bits of the previous structure is sequentially input to the memory circuit 542, and the value of the bit in the same bit position as the bit being decoded in the lattice decoder 434 for the current structure is used to calculate the respective bifurcation metrics for that structure. The a priori distortion for each branch is generated in an a priori distortion circuit 660 (shown in FIG. 6) from the probability output by the memory, as indicated in block 906 (FIG. 9). The respective outputs of the a priori distortion circuit are the product of a distortion estimate (2σ²) at input 662 and a respective one of the natural logarithms of the two probabilities (ln P{j|k}) output from the memory circuit 542 for each bit in the prior structure storage circuit 545. It will be recognized that the memory circuit 542 advantageously stores the ln P{j|k} values, such that the values output by the memory and introduced to the a priori distortion circuit are already the natural logarithm of the probability. Because the product 2σ² ln P{j|k} is a negative value, subtracting this negative value from the square Euclidean distance is actually an addition. The product on conductor 664 is thus combined with the sum of the square Euclidean distance in the adder 552. 
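The combination performed by the a priori distortion circuit 660 and the adder 552 can be sketched as follows (Python; `sigma2` stands for the distortion estimate σ² at input 662, and all names are illustrative):

```python
import math

def a_priori_distortion(sigma2, probability):
    """Output of the a priori distortion circuit: 2*sigma^2 * ln P{j|k}.
    Since ln P{j|k} <= 0, this value is never positive."""
    return 2.0 * sigma2 * math.log(probability)

def bifurcation_metric(square_euclidean, sigma2, probability):
    """Adder 552: subtracting the (negative) a priori distortion from the
    square Euclidean distance adds a penalty that is small for likely
    structure-to-structure transitions and large for unlikely ones."""
    return square_euclidean - a_priori_distortion(sigma2, probability)
```

For equal Euclidean distances, the branch whose bit value is more probable given the previous structure receives the smaller (better) metric.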
The adder 552 adds the square Euclidean distance to the a priori values, as indicated in block 908. The lattice decoder 434 proceeds to perform add-compare-select operations, as indicated in block 910 (FIG. 9), for all bits in a structure. The lattice decoder may be a feed-forward convolutional code decoder or a feedback convolutional code decoder. In either case, the lattice decoder uses the two metrics output by the adder 552 and associated with a possible bit value to select a better path to the points in the lattice (e.g., A, B, C, D) as a function of the last reliable structure. By way of example, the operation of a feed-forward decoder will be described for illustrative purposes based on the following assumptions: that points C and D in FIG. 11 are associated with bit position 94; that the value at bit position 94 of the last reliable decoded structure was 0; that points A and C correspond to a value of 0; and that points B and D correspond to a value of 1. The probability of 0.886 (the probability P{0|0}) is used to generate the a priori distortion circuit output 2σ² ln P{j|k} when the bifurcation metric is generated for path 00 to point C, since this is the probability that bit position 94 will have a value of 0 (point C) if bit position 94 in the last reliable structure had a value of 0. For path 10 to point C, this same probability of 0.886 is used to generate the output of the a priori distortion circuit. The metric used for path 00 to point C is a function of a stored historical value (path metric) for reaching point A plus the bifurcation metric output by the adder 552 for path 00 (which will be the square Euclidean distance associated with point C plus the a priori distortion associated with point C, which is a function of 0.886). 
The metric used for path 10 to point C is a function of a stored historical value (path metric) for point B plus the bifurcation metric output by the adder 552 for path 10 (which will be the square Euclidean distance associated with point C plus the a priori distortion associated with point C, which is a function of 0.886). Of the two path metrics at point C, the one with the better value will be selected for point C. The lattice decoder 434 performs similar calculations for path 01 to point D and path 11 to point D. The a priori distortion circuit output generated for both paths 01 and 11 will be a function of 0.114 (the probability P{1|0}). The resulting metric associated with the selected path to point C will be stored as the path metric at point C. The resulting metric associated with the selected path to point D will be stored as the path metric at point D. In this example, the structure-to-structure bit correlation value of 0.886 will weight the path metric at point C more favorably than the value of 0.114 weights the path metrics of the paths to point D, due to the high probability associated with the structure-to-structure correlation of this bit position. Accordingly, the path through point C will be favored over the path to point D when selecting the best path through the lattice. This facilitates the selection of the best path through the lattice decoder in view of the structure-to-structure correlation of this bit position. Those skilled in the art will recognize that the decoder circuit 326 performs similar calculations for all points on the lattice within a structure. The data points selected for the best path through the lattice for the entire structure will be output by the lattice decoder. A determination will then be made as to whether the structure is reliable or not. If it is a reliable structure, it will be stored in the prior structure storage circuit. 
If it is not a reliable structure, it will not be stored in the prior structure storage circuit. In this way, the structure stored in the prior structure storage circuit is always a good structure, and the probability calculation will always be based on the last reliable structure. Those skilled in the art will also recognize that if the value stored for bit position 94 in the previous structure had been 1 in the previous example, the bifurcation metrics for paths 00 and 10 to point C would have been a function of the probability of 0.176, and the bifurcation metrics for paths 01 and 11 to point D would have been a function of the probability of 0.824. These probabilities are the respective structure-to-structure probabilities P{0|1} and P{1|1} for bit position 94.
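The add-compare-select step described above for a single lattice point can be sketched as follows (Python; a lower metric is taken as better, and the names are illustrative, not the patent's):

```python
import math

def add_compare_select(path_metric_a, path_metric_b, dist_a, dist_b,
                       sigma2, probability):
    """One ACS step for one lattice point.  Both incoming paths land on the
    same bit value, so both use the same structure-to-structure probability
    (e.g. 0.886 for paths 00 and 10 into point C when bit position 94 of the
    last reliable structure was 0).

    path_metric_a, path_metric_b : accumulated metrics at the two predecessors
    dist_a, dist_b               : square Euclidean distances of the branches
    Returns (index of the surviving predecessor, surviving path metric)."""
    penalty = -2.0 * sigma2 * math.log(probability)  # positive for P < 1
    candidates = [path_metric_a + dist_a + penalty,
                  path_metric_b + dist_b + penalty]
    survivor = min(range(2), key=lambda i: candidates[i])
    return survivor, candidates[survivor]
```

Because the penalty is identical for both branches into a point, the a priori term decides between the two points (e.g. C versus D), not between the two paths into one point.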
The operation of the invention with a feedback-type convolutional code lattice decoder will now be described for illustrative purposes based on the following assumptions: that points A and B in FIG. 11 correspond to bit position 94; that points A and C represent a bit value of 0 and that points B and D represent a bit value of 1; and that in the last reliable decoded structure, bit position 94 had a value of 0. The probability of 0.886 (the probability P{0|0}) is used to generate the a priori distortion circuit output 2σ² ln P{j|k} when the bifurcation metric is generated for path 00 between points A and C, since this is the probability that bit position 94 will have a value of 0 (point A) if the previous structure had a value of 0. For path 10 between points B and C, the probability of 0.114 is used to generate the a priori distortion circuit output, since this is the probability that bit position 94 will be 1 (point B) if the previous structure was 0. The metric used for path 00 is a function of a stored historical value (path metric) for reaching point A plus the bifurcation metric output by the adder 552 for path 00 (which will be the square Euclidean distance associated with point A plus the a priori distortion associated with point A, which is a function of 0.886). The metric used for path 10 to point C will be a function of a stored historical value (path metric) for point B plus the bifurcation metric for path 10 (which will be the square Euclidean distance associated with point B plus the a priori distortion associated with point B, which is a function of 0.114). The path to point C that has the better metric will be selected and stored. The lattice decoder 434 performs similar calculations for path 01 between points A and D and path 11 between points B and D. 
The a priori distortion circuit output will be a function of 0.886 (the probability P{0|0} associated with point A) for path 01 and a function of 0.114 (the probability P{1|0} associated with point B) for path 11. The best metric for point D will be selected and stored as the path metric to point D; the best path to point D will be selected from these metrics. Because the paths leaving point A will be weighted more favorably, favoring a value of 0 for bit position 94 when the paths at both points are selected, the lattice decoder will take advantage of the high structure-to-structure correlation associated with this bit position to more accurately select the value for bit position 94. The stored historical metric (path metric) for the selected path at points C and D may include the a priori distortion circuit output, or the a priori distortion circuit output may be removed (subtracted) from the stored historical value for these points, in the backward-search lattice decoder. The decoder circuit 326 performs similar calculations for all points on the lattice within a structure. The data points selected for the best path through the lattice for the entire structure will be output by the lattice decoder. A determination will then be made as to whether the structure is reliable or unreliable. If it is a reliable structure, it will be stored in the prior structure storage circuit. If it is not a reliable structure, it will not be stored in the prior structure storage circuit. In this way, the structure stored in the prior structure storage circuit is always a good structure, and the probability calculation will always be based on the last reliable structure. The group selection circuit 330 (FIG. 3) responds to the output of the lattice decoder 434 in the FEC decoder 326 to move the output bits back to their original positions, as represented by the flow chart of FIG. 10. 
First, the group selection circuit inputs the structure coming from the lattice decoder, as indicated in block 1000. The decoder determines whether the structure is an unvoiced acoustic structure or a voiced acoustic structure, in decision block 1002. If the structure is an unvoiced acoustic structure (Mode 0, Table 1), the bits are reordered, or reassigned, to the positions identified in Table 5, as indicated in block 1006. In the decoder, the bits are relocated from the received position, which is the number to the right of the bit number b, to the original position at the input of the source encoder, which is the number b. In this way, after reassignment, the bit at position 94 moves to bit 0 (b0); the bit at position 93 moves to bit 1 (b1); and the bit at position 12 moves to bit 28 (b28). If the word is a voiced acoustic structure (Modes 1, 2, and 3), as determined in decision block 1002, the bits are reassigned according to Table 6, as indicated in block 1008. The reordered bits are processed in the source decoder 328, as indicated in block 1010. The signals output by the source decoder are converted to an analog signal in the D/A converter 332 (FIG. 3), amplified in an amplifier (not shown), and input to the loudspeaker 224. A particularly advantageous aspect of the present invention is that the parameter bits can be relocated for transmission in such a way that they are located in a structure according to their importance. This is possible because the invention uses bit-wise probabilities, which allows all the bits to be considered individually. For example, consider the "speech mode" parameter. From Tables 1 and 2, this is always the first parameter out of the speech coder (regardless of the mode) and always consists of two bits, b0 and b1. From Tables 3 and 4, it can be seen that bits b0 and b1 are always placed in the most important class and are essential for high-quality acoustics. 
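The transmit-side reordering and the receive-side reassignment form a permutation and its inverse, which can be sketched as follows (Python; the four-bit mapping below is purely illustrative, not Table 5's actual 95-bit mapping):

```python
def reorder_bits(bits, mapping):
    """Transmit-side reassignment (blocks 806/808): bit b moves to
    position mapping[b] before entering the FEC encoder."""
    out = [0] * len(bits)
    for b, position in enumerate(mapping):
        out[position] = bits[b]
    return out

def restore_bits(bits, mapping):
    """Receive-side group selection (blocks 1006/1008): the bit received at
    position mapping[b] moves back to its original position b."""
    return [bits[position] for position in mapping]

mapping = [3, 2, 0, 1]      # illustrative only
original = [1, 0, 1, 1]
round_trip = restore_bits(reorder_bits(original, mapping), mapping)
```

The round trip recovers the original ordering, which is what the group selection circuit 330 must guarantee before the source decoder runs.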
Tables 5 and 6 indicate that bits b0 and b1 enter the FEC encoder as bits 94 and 93, respectively. Table 7 shows that bit 94 has structure-to-structure bit probabilities of P{0|0} = 0.886, P{1|0} = 0.114, P{0|1} = 0.176, and P{1|1} = 0.824. It is thus likely that bit position 94 (that is, b0) has the same value as it had in the previous structure (that is, it is not likely to change). The present invention takes advantage of these features to place that bit in a location where the bit error rate is low, and uses the high correlation to help ensure that the correct mode is identified from structure to structure. Additionally, the fact that the bits are processed individually, rather than together, allows 2-by-2 metric matrices to be used, instead of a 4-by-4 metric matrix for the two-bit parameter. This reduces the possible metric paths from 16 to 8, greatly simplifying the path selection in the forward error correction decoder. In summary, the output of the speech encoder consists of a stream of bits of which certain bits have a high correlation with the bits of the previous structure. The bits with the highest correlation also tend to be the most important bits in the acoustic structure. These bits can be assigned the greatest protection offered by the FEC encoder using the invention.
It is anticipated that, in operation, if the last decoded acoustic structure is not considered reliable (i.e., it is flagged as a bad structure by a CRC or other conventional error detection method), the current structure can be decoded using conventional decoding. Additionally, if a structure of N coded bits contains only L data bits of the speech coder (L < N) which exhibit strong structure-to-structure bit correlation, then the new bifurcation metric can be used for those L bits and a conventional ML decoding metric can be used for the remaining N-L bits. In this way, the square Euclidean distance can be used by the Viterbi decoder to select the paths without using the structure-to-structure correlation value until a structure is considered unreliable. If the current structure is not considered reliable (that is, the value of the structure is unlikely), the last decoded structure considered reliable (that is, not flagged as a bad structure by a CRC or other error detection method) is used to decode the current structure by using the novel bifurcation metric:

sum over i = 1 to N of (p_i - s_i)² - 2σ² ln P{k|j}     (4)

where p_i is a bit of the received signal; s_i is a possible signal value (constellation point) output by a lattice encoder; σ² is an estimate of the white Gaussian noise variance; and ln P{k|j} is a stored value that represents a correlation between a possible value k of the bit on a lattice bifurcation of the current structure and the decoded value j of the same bit in a previous structure. Conventional Viterbi decoding is carried out on the remaining N-L bits encoded in the structure. 
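The per-bit selection between the novel metric of equation (4) and the conventional ML metric can be sketched as follows (Python; the names are illustrative, and the probability table is only a stand-in for the stored values):

```python
import math

def hybrid_metric(square_euclidean, bit_position, previous_frame,
                  probability_tables, sigma2, candidate_value):
    """Equation (4) for the L bits that have a stored probability table;
    plain square Euclidean distance for the remaining N-L bits."""
    table = probability_tables.get(bit_position)
    if table is None:
        return square_euclidean  # conventional ML metric
    p = table[(previous_frame[bit_position], candidate_value)]
    return square_euclidean - 2.0 * sigma2 * math.log(p)
```

A position absent from `probability_tables` is decoded exactly as a conventional Viterbi decoder would decode it; a present position is penalized according to how unlikely the candidate value is given the last reliable structure.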
Alternatively, the novel metric (equation 4), which combines the guide-bit structure-to-structure correlation values with the square Euclidean distance, is used to decode those bits that have a high structure-to-structure correlation, and the square Euclidean distance of the ML decoder (equation 3) is used without the structure-to-structure correlation value for those bits that have a low structure-to-structure correlation. For example, bits that have a low structure-to-structure correlation would have probabilities in the range of 0.451 to 0.550. Bits that have a probability in the range of 0 to 0.450 or 0.551 to 1.00 would be considered to have a high structure-to-structure correlation. In this embodiment, the memory 442 is smaller, because those bits having a low structure-to-structure correlation do not have a probability metric matrix stored in the memory. It is also envisaged that the structure-to-structure correlation metric can be used for all bits in the lattice structure, without taking into account the structure-to-structure correlation of the bits. The results obtained for a VSELP digital acoustic encoder on a noisy channel (defined as a channel with a BER of 8.6%) using a conventional ML Viterbi decoder had a decoded BER of 1.98% for a 100-second acoustic sequence. For a decoder using the novel bifurcation metric of equation 4 for each bifurcation in the lattice, the decoded BER decreases to 1.85%. In this way, the invention provides an average improvement of 7%. This is a humanly perceptible increase in acoustic quality. The performance is further improved because the decoded bits having the strongest guide-bit correlation are moved to the positions that have the lowest bit error rate. The effect of these bits is the most perceptible of the bits of the acoustic encoder. 
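The classification band given above translates directly into a selection rule (Python; a sketch only):

```python
def has_high_correlation(probability):
    """Per the ranges above: probabilities in 0.451..0.550 indicate low
    structure-to-structure correlation; values in 0..0.450 or 0.551..1.00
    indicate high correlation.  Only high-correlation bit positions need a
    probability matrix stored in the memory."""
    return not (0.451 <= probability <= 0.550)
```

Applied to Table 7, bit position 94 (P{0|0} = 0.886) qualifies as high correlation, while a position such as bit 19 (probabilities near 0.5) would not.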
The inclusion of the structure-to-structure bit correlation in the metric benefits these particular bits because of their high structure-to-structure correlation. The improvement in the resulting acoustic quality is thus effectively much greater than 7%. This is another significant benefit of the present invention. In many digital acoustic encoder systems, a "bad structure" strategy is used to mitigate the effects of decoding errors on the output acoustic quality. In these strategies, the most significant acoustic bits are monitored at the output of the channel decoder. If an error is suspected in one of these bits, the acoustic structure is discarded. If such a bad-structure strategy is used in a system employing the present invention, the invention will reduce the number of discarded structures, because the most important bits have a high structure-to-structure correlation and significantly fewer errors will occur in these bits compared to conventional decoding methods. A further advantage of the invention is that it does not significantly add to the circuitry required to implement the decoder circuit. Many conventional decoders retain the previous structure for the situation in which the following structure is discarded. Accordingly, the data of the previous structure are available for processing without adding a significant amount of circuitry relative to existing systems. The present invention is illustrated in a GSM cellular radiotelephone, where it is particularly advantageous. However, the invention can also advantageously be used to decode the signals communicated from low-rate acoustic encoders and low-rate video encoders where there is a structure-to-structure bit correlation. 
In this way, the invention has application in Viterbi decoding of convolutional codes, punctured convolutional codes, lattice-coded modulation, continuous phase modulation, partial response signaling systems, maximum likelihood sequence estimation, block codes, and block-coded modulation. In addition to these specific Viterbi applications, the invention has application in the M-algorithm and the generalized Viterbi algorithms. Although the invention is described with the bit probability generated from the bit in a single previous structure, the probability can also be generated from the bit values in the same bit position in a plurality of previous structures.
The values stored in the memory in this modality are P{j|k,h}, instead of P{j|k}. The value of P{j|k,h} is the probability that the bit has the value j if the bit value in the same position in the previous structure was k and the bit value in the same position in the structure before that was h. In this way, the table is larger, and the probability depends on the two previous structures. The invention, which is illustrated with acoustic decoders, could alternatively be used to advantage with any system where the signals have a high structure-to-structure correlation. From the above, it can be seen that an improved decoder is disclosed. The decoder uses the high structure-to-structure correlation of some bits to improve decoder performance. Additional improvements are obtained by placing the most important bits in those positions that have the lowest bit error rate.
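A sketch of the two-structure lookup P{j|k,h} (Python; the table entries below are invented for illustration only — the patent does not publish values for this modality):

```python
# P{j|k,h}: current value j, given value k in the previous structure and
# value h in the structure before that.  Entries are hypothetical.
P_TWO_FRAMES = {
    (0, 0, 0): 0.92, (0, 0, 1): 0.08,
    (0, 1, 0): 0.60, (0, 1, 1): 0.40,
    (1, 0, 0): 0.55, (1, 0, 1): 0.45,
    (1, 1, 0): 0.12, (1, 1, 1): 0.88,
}

def two_frame_probability(table, k, h, j):
    """Probability that the current bit is j given the values k and h in the
    same bit position of the two previous structures."""
    return table[(k, h, j)]
```

As with the single-structure table, the two entries for each (k, h) pair must sum to 1; the table simply has four rows per bit position instead of two.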

Claims (10)

1. NOVELTY OF THE INVENTION Having described the present invention, it is considered a novelty, and therefore the content of the following claims is claimed as property. A decoder circuit (223) comprising: an input (325) for inputting data bits; a probability circuit (432) that generates respective probabilities; and a decoder (434) coupled to the input and to the probability circuit, the decoder receiving the data bits and generating an output data sequence, the decoder producing the output data sequence by selecting bit values according to the input data and the respective probabilities generated by the probability circuit; characterized in that the probabilities are associated with the bits in at least predetermined bit positions of the input data structures, the respective probabilities being generated on a bit-by-bit basis as a function of a bit value in the same bit position in a previous data structure.
2. The decoder circuit according to claim 1, further characterized in that the probability circuit includes a memory circuit (542) that stores the respective probabilities for each of the at least predetermined bit positions.
3. The decoder circuit according to claim 2, further characterized in that the memory circuit stores a plurality of probability values associated with each of the at least predetermined bit positions, wherein one of the plurality of probability values associated with each of the at least predetermined bit positions is a probability that a bit value will be repeated and another of the plurality of probability values associated with each of the at least predetermined bit positions is a probability that a bit value will change.
  4. The decoder circuit according to claim 3, further characterized in that it includes a prior structure storage circuit (545) coupled between an output of the decoder and an input of the memory circuit.
5. The decoder circuit according to claim 4, further characterized in that it includes a bifurcation metric generator (540) coupled to an output of the probability circuit and to an input of the decoder, the bifurcation metric generator outputting a bifurcation metric value to the decoder.
6. The decoder circuit according to claim 5, further characterized in that the decoder (434) is a lattice decoder.
7. The decoder circuit according to claim 6, further characterized in that the bifurcation metric generator includes an adder (552) and a square Euclidean distance circuit (650), the square Euclidean distance circuit being coupled to the adder.
8. The decoder circuit according to claim 7, further characterized in that the bifurcation metric generator includes an a priori circuit (660) coupled to a memory circuit and to the adder, the adder adding an output of the a priori circuit and an output of the square Euclidean distance circuit to produce the bifurcation metric.
9. The decoder circuit according to claim 1, further characterized in that the respective probabilities are generated as a function of the bit values in the same bit position of more than one prior data structure. 10. The decoder circuit according to claim 1, further characterized in that the decoder uses a square Euclidean distance and a probability value in order to select a path for the bits that have a high structure-to-structure correlation, and the decoder uses a square Euclidean distance in order to select a path for the bits that have a low structure-to-structure correlation.
MX9601206A 1995-03-31 1996-03-29 Decoder and method therefor. MX9601206A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US41425895A 1995-03-31 1995-03-31
US08/414,258 1995-03-31

Publications (2)

Publication Number Publication Date
MXPA96001206A true MXPA96001206A (en) 1997-08-01
MX9601206A MX9601206A (en) 1997-08-30

Family

ID=23640649

Family Applications (1)

Application Number Title Priority Date Filing Date
MX9601206A MX9601206A (en) 1995-03-31 1996-03-29 Decoder and method therefor.

Country Status (6)

Country Link
US (1) US6215831B1 (en)
CA (1) CA2171922C (en)
DE (1) DE19612715A1 (en)
ES (1) ES2117560B1 (en)
GB (1) GB2299491B (en)
MX (1) MX9601206A (en)

Families Citing this family (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE69704862T2 (en) * 1996-06-13 2001-08-30 Nortel Networks Ltd SPACIOUS CORDLESS DISTRIBUTION SYSTEM
US6438180B1 (en) * 1997-05-09 2002-08-20 Carnegie Mellon University Soft and hard sequence detection in ISI memory channels
DE60022624T2 (en) * 1999-01-20 2006-06-22 Broadcom Corp., Irvine TRELLY DECODER WITH CORRECTION OF PAIR EXCHANGE, FOR USE IN TRANSMITTERS / RECIPIENTS FOR GIGABIT ETHERNET
US6452984B1 (en) * 1999-02-25 2002-09-17 Lsi Logic Corporation Metric biasing for maximum likelihood sequence estimators
US6758435B2 (en) * 1999-12-09 2004-07-06 Rheinmetall W & M Gmbh Guide assembly for a missile
US7010483B2 (en) * 2000-06-02 2006-03-07 Canon Kabushiki Kaisha Speech processing system
US7035790B2 (en) * 2000-06-02 2006-04-25 Canon Kabushiki Kaisha Speech processing system
US7072833B2 (en) * 2000-06-02 2006-07-04 Canon Kabushiki Kaisha Speech processing system
US6813322B2 (en) * 2001-04-26 2004-11-02 Telefonaktiebolaget L.M. Ericsson (Publ) Soft output value biasing
JP3532884B2 (en) * 2001-05-18 2004-05-31 松下電器産業株式会社 Viterbi decoder
US20030073416A1 (en) * 2001-10-17 2003-04-17 Shinichiro Ohmi Diversity apparatus and method therefor
JP4116562B2 (en) * 2001-11-29 2008-07-09 クゥアルコム・インコーポレイテッド Method and apparatus for determining log-likelihood ratio in precoding
JP3817470B2 (en) * 2001-12-04 2006-09-06 シャープ株式会社 Signal evaluation apparatus and signal evaluation method
US7154965B2 (en) 2002-10-08 2006-12-26 President And Fellows Of Harvard College Soft detection of data symbols in the presence of intersymbol interference and timing error
US7733988B2 (en) * 2005-10-28 2010-06-08 Alcatel-Lucent Usa Inc. Multiframe control channel detection for enhanced dedicated channel
US8165224B2 (en) 2007-03-22 2012-04-24 Research In Motion Limited Device and method for improved lost frame concealment
AU2007237313A1 (en) * 2007-12-03 2009-06-18 Canon Kabushiki Kaisha Improvement for error correction in distributed video coding
US7895146B2 (en) * 2007-12-03 2011-02-22 Microsoft Corporation Time modulated generative probabilistic models for automated causal discovery that monitors times of packets
US8391408B2 (en) * 2008-05-06 2013-03-05 Industrial Technology Research Institute Method and apparatus for spatial mapping matrix searching
KR101442259B1 (en) * 2008-09-03 2014-09-25 엘지전자 주식회사 A method of operating a relay in a wireless communication system
WO2010027136A1 (en) * 2008-09-03 2010-03-11 Lg Electronics Inc. Realy station and method of operating the same
US9195533B1 (en) 2012-10-19 2015-11-24 Seagate Technology Llc Addressing variations in bit error rates amongst data storage segments
US8732555B2 (en) 2012-10-19 2014-05-20 Seagate Technology Llc Addressing variations in bit error rates amongst data storage segments
US10552252B2 (en) 2016-08-29 2020-02-04 Seagate Technology Llc Patterned bit in error measurement apparatus and method

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4709377A (en) * 1985-03-13 1987-11-24 Paradyne Viterbi decoder for wireline modems
US4833693A (en) * 1985-11-21 1989-05-23 Codex Corporation Coded modulation system using interleaving for decision-feedback equalization
US4945549A (en) * 1986-11-13 1990-07-31 The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration Trellis coded modulation for transmission over fading mobile satellite channel
US4742533A (en) 1987-01-02 1988-05-03 Motorola, Inc. Soft decision digital communication apparatus
DE3910739C3 (en) * 1989-04-03 1996-11-21 Deutsche Forsch Luft Raumfahrt Method for generalizing the Viterbi algorithm and means for performing the method
JP2553743B2 (en) * 1990-07-05 1996-11-13 松下電器産業株式会社 Digital signal magnetic recording / reproducing device
US5134635A (en) * 1990-07-30 1992-07-28 Motorola, Inc. Convolutional decoder using soft-decision decoding with channel state information
US5214675A (en) 1991-07-02 1993-05-25 Motorola, Inc. System and method for calculating channel gain and noise variance of a communication channel
JP2876497B2 (en) * 1991-08-23 1999-03-31 松下電器産業株式会社 Error correction code decoding method and apparatus
DE4340387C1 (en) * 1993-11-26 1994-12-22 Siemens Ag Method for the encoded transmission of voice (speech) signals
EP0700182B1 (en) * 1994-08-31 2001-01-03 Nec Corporation Apparatus for error correcting decoding in digital data communication systems

Similar Documents

Publication Publication Date Title
MXPA96001206A (en) Decoder and method for me
US6215831B1 (en) Decoder circuit using bit-wise probability and method therefor
CN1155160C (en) Method and apparatus for transmitting and receiving
CN1101997C (en) Method and apparatus for rate determination in communication system
US6484285B1 (en) Tailbiting decoder and method
US5432822A (en) Error correcting decoder and decoding method employing reliability based erasure decision-making in cellular communication system
US6085349A (en) Method for selecting cyclic redundancy check polynomials for linear coded systems
US5838267A (en) Method and apparatus for encoding and decoding digital information
JPH07273813A (en) Method and apparatus for generating soft symbol
CN1104795C (en) Interference mitigation by joint detection of cochannel signals
US6728323B1 (en) Baseband processors, mobile terminals, base stations and methods and systems for decoding a punctured coded received signal using estimates of punctured bits
KR20000053091A (en) Soft decision output decoder for decoding convolutionally encoded codewords
CN107231158B (en) Polarization code iterative receiver, system and polarization code iterative decoding method
CA2058775C (en) Transmission and decoding of tree-encoder parameters of analogue signals
KR19980064845A (en) Coding and decoding system using seed check bits
CN108631792B (en) Method and device for encoding and decoding polarization code
EP0529909B1 (en) Error correction encoding/decoding method and apparatus therefor
CN107911195B (en) CVA-based tail-biting convolutional code channel decoding method
US6374387B1 (en) Viterbi decoding of punctured convolutional codes without real-time branch metric computation
KR20010085425A (en) Data transmission method, data transmission system, sending device and receiving device
KR20160031781A (en) Method and apparatus for decoding in a system using binary serial concatenated code
EP1821415B1 (en) Hybrid decoding using multiple turbo decoders in parallel
KR20000057712A (en) Data reception apparatus and data reception method
RU2301492C2 (en) Method and device for transmitting voice information in digital radio communication system
UA75863C2 (en) Serial viterbi decoder (variants) and a method of serial viterbi decoding